00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2431 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3692 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.065 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.098 Fetching changes from the remote Git repository 00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.136 Using shallow fetch with depth 1 00:00:00.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.136 > git --version # timeout=10 00:00:00.170 > git --version # 'git version 2.39.2' 00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.452 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.463 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.474 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.474 > git config core.sparsecheckout # timeout=10 00:00:04.484 > git read-tree -mu HEAD # timeout=10 00:00:04.499 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.522 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.522 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.648 [Pipeline] Start of Pipeline 00:00:04.660 [Pipeline] library 00:00:04.661 Loading library shm_lib@master 00:00:04.661 Library shm_lib@master is cached. Copying from home. 00:00:04.675 [Pipeline] node 00:00:04.687 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.688 [Pipeline] { 00:00:04.698 [Pipeline] catchError 00:00:04.699 [Pipeline] { 00:00:04.709 [Pipeline] wrap 00:00:04.717 [Pipeline] { 00:00:04.725 [Pipeline] stage 00:00:04.726 [Pipeline] { (Prologue) 00:00:04.742 [Pipeline] echo 00:00:04.744 Node: VM-host-SM38 00:00:04.749 [Pipeline] cleanWs 00:00:04.758 [WS-CLEANUP] Deleting project workspace... 00:00:04.758 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.764 [WS-CLEANUP] done 00:00:04.978 [Pipeline] setCustomBuildProperty 00:00:05.085 [Pipeline] httpRequest 00:00:05.735 [Pipeline] echo 00:00:05.737 Sorcerer 10.211.164.20 is alive 00:00:05.745 [Pipeline] retry 00:00:05.746 [Pipeline] { 00:00:05.758 [Pipeline] httpRequest 00:00:05.762 HttpMethod: GET 00:00:05.762 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.763 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.764 Response Code: HTTP/1.1 200 OK 00:00:05.764 Success: Status code 200 is in the accepted range: 200,404 00:00:05.765 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.356 [Pipeline] } 00:00:06.368 [Pipeline] // retry 00:00:06.374 [Pipeline] sh 00:00:06.649 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.660 [Pipeline] httpRequest 00:00:07.217 [Pipeline] echo 00:00:07.219 Sorcerer 10.211.164.20 is alive 00:00:07.231 [Pipeline] retry 00:00:07.234 [Pipeline] { 00:00:07.246 [Pipeline] httpRequest 00:00:07.250 HttpMethod: GET 00:00:07.250 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.251 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.252 Response Code: HTTP/1.1 200 OK 00:00:07.252 Success: Status code 200 is in the accepted range: 200,404 00:00:07.253 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:26.468 [Pipeline] } 00:00:26.492 [Pipeline] // retry 00:00:26.502 [Pipeline] sh 00:00:26.787 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:30.107 [Pipeline] sh 00:00:30.393 + git -C spdk log --oneline -n5 00:00:30.393 c13c99a5e test: Various fixes for Fedora40 00:00:30.393 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:30.393 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:30.393 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:30.393 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:30.415 [Pipeline] writeFile 00:00:30.431 [Pipeline] sh 00:00:30.716 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:30.729 [Pipeline] sh 00:00:31.051 + cat autorun-spdk.conf 00:00:31.051 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.051 SPDK_TEST_NVME=1 00:00:31.051 SPDK_TEST_FTL=1 00:00:31.051 SPDK_TEST_ISAL=1 00:00:31.051 SPDK_RUN_ASAN=1 00:00:31.051 SPDK_RUN_UBSAN=1 00:00:31.051 SPDK_TEST_XNVME=1 00:00:31.051 SPDK_TEST_NVME_FDP=1 00:00:31.051 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.059 RUN_NIGHTLY=1 00:00:31.061 [Pipeline] } 00:00:31.074 [Pipeline] // stage 00:00:31.089 [Pipeline] stage 00:00:31.091 [Pipeline] { (Run VM) 00:00:31.101 [Pipeline] sh 00:00:31.385 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.385 + echo 'Start stage prepare_nvme.sh' 00:00:31.385 Start stage prepare_nvme.sh 00:00:31.385 + [[ -n 1 ]] 00:00:31.385 + disk_prefix=ex1 00:00:31.385 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:31.385 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:31.385 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:31.385 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.385 ++ SPDK_TEST_NVME=1 00:00:31.385 ++ SPDK_TEST_FTL=1 00:00:31.385 ++ SPDK_TEST_ISAL=1 00:00:31.385 ++ 
SPDK_RUN_ASAN=1 00:00:31.385 ++ SPDK_RUN_UBSAN=1 00:00:31.385 ++ SPDK_TEST_XNVME=1 00:00:31.385 ++ SPDK_TEST_NVME_FDP=1 00:00:31.385 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.385 ++ RUN_NIGHTLY=1 00:00:31.385 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:31.385 + nvme_files=() 00:00:31.385 + declare -A nvme_files 00:00:31.385 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.385 + nvme_files['nvme.img']=5G 00:00:31.385 + nvme_files['nvme-cmb.img']=5G 00:00:31.385 + nvme_files['nvme-multi0.img']=4G 00:00:31.385 + nvme_files['nvme-multi1.img']=4G 00:00:31.385 + nvme_files['nvme-multi2.img']=4G 00:00:31.385 + nvme_files['nvme-openstack.img']=8G 00:00:31.385 + nvme_files['nvme-zns.img']=5G 00:00:31.385 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.385 + (( SPDK_TEST_FTL == 1 )) 00:00:31.385 + nvme_files["nvme-ftl.img"]=6G 00:00:31.385 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.385 + nvme_files["nvme-fdp.img"]=1G 00:00:31.385 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.385 + for nvme in "${!nvme_files[@]}" 00:00:31.385 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:31.385 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.385 + for nvme in "${!nvme_files[@]}" 00:00:31.385 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:00:31.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:31.646 + for nvme in "${!nvme_files[@]}" 00:00:31.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:31.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.646 + for nvme in "${!nvme_files[@]}" 00:00:31.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:31.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.646 + for nvme in "${!nvme_files[@]}" 00:00:31.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:31.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.646 + for nvme in "${!nvme_files[@]}" 00:00:31.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:31.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.646 + for nvme in "${!nvme_files[@]}" 00:00:31.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:31.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.646 + for nvme in "${!nvme_files[@]}" 00:00:31.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:00:31.907 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:31.907 + for nvme in "${!nvme_files[@]}" 00:00:31.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:31.907 Formatting 
'/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.907 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:31.907 + echo 'End stage prepare_nvme.sh' 00:00:31.907 End stage prepare_nvme.sh 00:00:31.921 [Pipeline] sh 00:00:32.206 + DISTRO=fedora39 00:00:32.206 + CPUS=10 00:00:32.206 + RAM=12288 00:00:32.206 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.206 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:32.206 00:00:32.206 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:32.206 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:32.206 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:32.206 HELP=0 00:00:32.206 DRY_RUN=0 00:00:32.206 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:00:32.206 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:32.206 NVME_AUTO_CREATE=0 00:00:32.206 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:00:32.206 NVME_CMB=,,,, 00:00:32.206 NVME_PMR=,,,, 00:00:32.206 NVME_ZNS=,,,, 00:00:32.206 NVME_MS=true,,,, 00:00:32.206 NVME_FDP=,,,on, 00:00:32.206 SPDK_VAGRANT_DISTRO=fedora39 00:00:32.206 SPDK_VAGRANT_VMCPU=10 00:00:32.206 SPDK_VAGRANT_VMRAM=12288 00:00:32.206 SPDK_VAGRANT_PROVIDER=libvirt 00:00:32.206 SPDK_VAGRANT_HTTP_PROXY= 00:00:32.206 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:32.206 SPDK_OPENSTACK_NETWORK=0 00:00:32.206 VAGRANT_PACKAGE_BOX=0 00:00:32.206 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:32.206 FORCE_DISTRO=true 00:00:32.206 VAGRANT_BOX_VERSION= 00:00:32.206 EXTRA_VAGRANTFILES= 00:00:32.206 NIC_MODEL=e1000 00:00:32.206 00:00:32.206 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:32.206 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:34.122 Bringing machine 'default' up with 'libvirt' provider... 00:00:34.382 ==> default: Creating image (snapshot of base box volume). 00:00:35.326 ==> default: Creating domain with the following settings... 
00:00:35.326 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733221684_a731c5b3c91db5ac2f16 00:00:35.326 ==> default: -- Domain type: kvm 00:00:35.326 ==> default: -- Cpus: 10 00:00:35.326 ==> default: -- Feature: acpi 00:00:35.326 ==> default: -- Feature: apic 00:00:35.326 ==> default: -- Feature: pae 00:00:35.326 ==> default: -- Memory: 12288M 00:00:35.326 ==> default: -- Memory Backing: hugepages: 00:00:35.326 ==> default: -- Management MAC: 00:00:35.326 ==> default: -- Loader: 00:00:35.326 ==> default: -- Nvram: 00:00:35.326 ==> default: -- Base box: spdk/fedora39 00:00:35.326 ==> default: -- Storage pool: default 00:00:35.326 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733221684_a731c5b3c91db5ac2f16.img (20G) 00:00:35.326 ==> default: -- Volume Cache: default 00:00:35.326 ==> default: -- Kernel: 00:00:35.326 ==> default: -- Initrd: 00:00:35.326 ==> default: -- Graphics Type: vnc 00:00:35.326 ==> default: -- Graphics Port: -1 00:00:35.326 ==> default: -- Graphics IP: 127.0.0.1 00:00:35.326 ==> default: -- Graphics Password: Not defined 00:00:35.326 ==> default: -- Video Type: cirrus 00:00:35.326 ==> default: -- Video VRAM: 9216 00:00:35.326 ==> default: -- Sound Type: 00:00:35.326 ==> default: -- Keymap: en-us 00:00:35.326 ==> default: -- TPM Path: 00:00:35.326 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:35.326 ==> default: -- Command line args: 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:35.326 ==> default: -> value=-drive, 00:00:35.326 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:35.326 ==> default: -> value=-drive, 00:00:35.326 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:00:35.326 ==> default: -> value=-drive, 00:00:35.326 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.326 ==> default: -> value=-drive, 00:00:35.326 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.326 ==> default: -> value=-drive, 00:00:35.326 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.326 ==> default: -> value=-device, 00:00:35.326 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:35.326 ==> default: -> value=-device, 00:00:35.327 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:00:35.327 ==> default: -> value=-drive, 00:00:35.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:35.327 ==> default: -> value=-device, 00:00:35.327 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.327 ==> default: Creating shared folders metadata... 00:00:35.327 ==> default: Starting domain. 00:00:37.242 ==> default: Waiting for domain to get an IP address... 00:00:55.363 ==> default: Waiting for SSH to become available... 00:00:55.363 ==> default: Configuring and enabling network interfaces... 00:00:57.906 default: SSH address: 192.168.121.222:22 00:00:57.906 default: SSH username: vagrant 00:00:57.906 default: SSH auth method: private key 00:00:59.824 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.109 ==> default: Mounting SSHFS shared folder... 00:01:09.053 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:09.053 ==> default: Checking Mount.. 00:01:10.440 ==> default: Folder Successfully Mounted! 00:01:10.440 00:01:10.440 SUCCESS! 00:01:10.440 00:01:10.440 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:10.440 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:10.440 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:10.440 00:01:10.451 [Pipeline] } 00:01:10.469 [Pipeline] // stage 00:01:10.481 [Pipeline] dir 00:01:10.482 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:10.484 [Pipeline] { 00:01:10.502 [Pipeline] catchError 00:01:10.504 [Pipeline] { 00:01:10.519 [Pipeline] sh 00:01:10.804 + vagrant ssh-config --host vagrant 00:01:10.804 + sed -ne '/^Host/,$p' 00:01:10.804 + tee ssh_conf 00:01:13.354 Host vagrant 00:01:13.354 HostName 192.168.121.222 00:01:13.354 User vagrant 00:01:13.354 Port 22 00:01:13.354 UserKnownHostsFile /dev/null 00:01:13.354 StrictHostKeyChecking no 00:01:13.354 PasswordAuthentication no 00:01:13.354 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:13.354 IdentitiesOnly yes 00:01:13.354 LogLevel FATAL 00:01:13.354 ForwardAgent yes 00:01:13.354 ForwardX11 yes 00:01:13.354 00:01:13.369 [Pipeline] withEnv 00:01:13.372 [Pipeline] { 00:01:13.386 [Pipeline] sh 00:01:13.669 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:13.670 source /etc/os-release 00:01:13.670 [[ -e /image.version ]] && img=$(< /image.version) 00:01:13.670 # Minimal, systemd-like check. 
00:01:13.670 if [[ -e /.dockerenv ]]; then 00:01:13.670 # Clear garbage from the node'\''s name: 00:01:13.670 # agt-er_autotest_547-896 -> autotest_547-896 00:01:13.670 # $HOSTNAME is the actual container id 00:01:13.670 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:13.670 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:13.670 # We can assume this is a mount from a host where container is running, 00:01:13.670 # so fetch its hostname to easily identify the target swarm worker. 00:01:13.670 container="$(< /etc/hostname) ($agent)" 00:01:13.670 else 00:01:13.670 # Fallback 00:01:13.670 container=$agent 00:01:13.670 fi 00:01:13.670 fi 00:01:13.670 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:13.670 ' 00:01:13.945 [Pipeline] } 00:01:13.957 [Pipeline] // withEnv 00:01:13.964 [Pipeline] setCustomBuildProperty 00:01:13.978 [Pipeline] stage 00:01:13.979 [Pipeline] { (Tests) 00:01:13.994 [Pipeline] sh 00:01:14.323 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:14.611 [Pipeline] sh 00:01:14.903 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:15.177 [Pipeline] timeout 00:01:15.178 Timeout set to expire in 50 min 00:01:15.179 [Pipeline] { 00:01:15.192 [Pipeline] sh 00:01:15.476 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:16.048 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:16.062 [Pipeline] sh 00:01:16.347 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:16.626 [Pipeline] sh 00:01:16.911 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:17.186 [Pipeline] sh 00:01:17.464 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:17.724 ++ readlink -f spdk_repo 00:01:17.724 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:17.724 + [[ -n /home/vagrant/spdk_repo ]] 00:01:17.724 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:17.724 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:17.724 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:17.724 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:17.724 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:17.724 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:17.724 + cd /home/vagrant/spdk_repo 00:01:17.724 + source /etc/os-release 00:01:17.724 ++ NAME='Fedora Linux' 00:01:17.724 ++ VERSION='39 (Cloud Edition)' 00:01:17.724 ++ ID=fedora 00:01:17.724 ++ VERSION_ID=39 00:01:17.724 ++ VERSION_CODENAME= 00:01:17.724 ++ PLATFORM_ID=platform:f39 00:01:17.724 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:17.724 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.724 ++ LOGO=fedora-logo-icon 00:01:17.724 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:17.724 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.724 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:17.724 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.724 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.724 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.724 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:17.724 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.724 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:17.724 ++ SUPPORT_END=2024-11-12 00:01:17.724 ++ VARIANT='Cloud Edition' 00:01:17.724 ++ VARIANT_ID=cloud 00:01:17.724 + uname -a 00:01:17.724 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:17.724 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:17.724 Hugepages 00:01:17.724 node hugesize free / total 00:01:17.724 node0 1048576kB 0 / 0 00:01:17.724 node0 2048kB 0 / 0 00:01:17.724 00:01:17.724 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:17.724 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:17.724 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:17.724 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:17.724 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:17.724 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:17.724 + rm -f /tmp/spdk-ld-path 00:01:17.724 + source autorun-spdk.conf 00:01:17.724 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.724 ++ SPDK_TEST_NVME=1 00:01:17.724 ++ SPDK_TEST_FTL=1 00:01:17.724 ++ SPDK_TEST_ISAL=1 00:01:17.724 ++ SPDK_RUN_ASAN=1 00:01:17.724 ++ SPDK_RUN_UBSAN=1 00:01:17.724 ++ SPDK_TEST_XNVME=1 00:01:17.724 ++ SPDK_TEST_NVME_FDP=1 00:01:17.724 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.724 ++ RUN_NIGHTLY=1 00:01:17.724 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:17.724 + [[ -n '' ]] 00:01:17.724 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:17.985 + for M in /var/spdk/build-*-manifest.txt 00:01:17.985 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:17.985 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.985 + for M in /var/spdk/build-*-manifest.txt 00:01:17.985 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:17.985 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.985 + for M in /var/spdk/build-*-manifest.txt 00:01:17.985 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:17.985 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.985 ++ uname 00:01:17.985 + [[ Linux == \L\i\n\u\x ]] 00:01:17.985 + sudo dmesg -T 00:01:17.985 + sudo dmesg --clear 00:01:17.985 + dmesg_pid=4991 00:01:17.985 + [[ Fedora Linux == FreeBSD ]] 00:01:17.985 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.985 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.985 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:17.985 + [[ -x /usr/src/fio-static/fio ]] 00:01:17.985 + sudo dmesg -Tw 00:01:17.985 + export FIO_BIN=/usr/src/fio-static/fio 00:01:17.985 + FIO_BIN=/usr/src/fio-static/fio 00:01:17.985 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:17.985 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:17.985 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:17.985 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.985 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.985 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:17.985 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.985 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.985 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:17.985 Test configuration: 00:01:17.985 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.985 SPDK_TEST_NVME=1 00:01:17.985 SPDK_TEST_FTL=1 00:01:17.985 SPDK_TEST_ISAL=1 00:01:17.985 SPDK_RUN_ASAN=1 00:01:17.985 SPDK_RUN_UBSAN=1 00:01:17.985 SPDK_TEST_XNVME=1 00:01:17.985 SPDK_TEST_NVME_FDP=1 00:01:17.985 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.985 RUN_NIGHTLY=1 10:28:48 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:17.985 10:28:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:17.985 10:28:48 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:17.985 10:28:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:17.985 10:28:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:17.985 10:28:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.985 10:28:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.985 10:28:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.985 10:28:48 -- paths/export.sh@5 -- $ export PATH 00:01:17.985 10:28:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.985 10:28:48 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:17.985 10:28:48 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:17.985 10:28:48 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733221728.XXXXXX 00:01:17.985 10:28:48 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733221728.7mVHcP 00:01:17.985 10:28:48 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:17.985 10:28:48 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:17.985 10:28:48 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:17.985 10:28:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:17.985 10:28:48 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:17.985 10:28:48 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:17.985 10:28:48 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:17.985 10:28:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.985 10:28:48 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:17.985 10:28:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.985 10:28:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.985 10:28:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:17.985 10:28:48 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.985 Tue Dec 3 10:28:48 AM UTC 2024 00:01:17.985 10:28:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.985 LTS-67-gc13c99a5e 00:01:17.985 10:28:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:17.985 10:28:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:17.985 10:28:48 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:17.985 10:28:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:17.985 10:28:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.246 ************************************ 00:01:18.246 START TEST asan 00:01:18.246 ************************************ 00:01:18.246 using asan 00:01:18.246 10:28:48 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:01:18.246 00:01:18.246 real 0m0.000s 00:01:18.246 user 0m0.000s 00:01:18.246 sys 0m0.000s 00:01:18.246 10:28:48 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:18.246 10:28:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.246 ************************************ 00:01:18.246 END TEST asan 00:01:18.246 ************************************ 00:01:18.246 10:28:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:18.246 10:28:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:18.246 10:28:48 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:18.246 10:28:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:18.246 10:28:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.246 ************************************ 00:01:18.246 START TEST ubsan 00:01:18.246 ************************************ 00:01:18.246 using ubsan 00:01:18.246 10:28:48 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:18.246 00:01:18.246 real 0m0.000s 00:01:18.246 user 0m0.000s 00:01:18.246 sys 
0m0.000s 00:01:18.246 ************************************ 00:01:18.246 END TEST ubsan 00:01:18.246 ************************************ 00:01:18.246 10:28:48 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:18.246 10:28:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.246 10:28:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:18.246 10:28:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:18.246 10:28:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:18.246 10:28:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:18.246 10:28:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:18.246 10:28:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:18.246 10:28:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:18.246 10:28:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:18.246 10:28:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:18.246 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:18.246 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:18.816 Using 'verbs' RDMA provider 00:01:31.706 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:41.714 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:41.714 Creating mk/config.mk...done. 00:01:41.714 Creating mk/cc.flags.mk...done. 00:01:41.714 Type 'make' to build. 00:01:41.714 10:29:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:41.714 10:29:11 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:41.714 10:29:11 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:41.714 10:29:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.714 ************************************ 00:01:41.714 START TEST make 00:01:41.714 ************************************ 00:01:41.714 10:29:11 -- common/autotest_common.sh@1114 -- $ make -j10 00:01:41.714 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:41.714 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:41.714 meson setup builddir \ 00:01:41.714 -Dwith-libaio=enabled \ 00:01:41.714 -Dwith-liburing=enabled \ 00:01:41.714 -Dwith-libvfn=disabled \ 00:01:41.714 -Dwith-spdk=false && \ 00:01:41.714 meson compile -C builddir && \ 00:01:41.714 cd -) 00:01:41.714 make[1]: Nothing to be done for 'all'. 
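The recipe echoed above is what the SPDK make step runs to configure and build the bundled xnvme; the Meson output that follows is produced by it. A minimal standalone sketch of the same sequence, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk with meson and ninja installed (paths and flags are taken verbatim from the echoed recipe):

    # Configure and build xnvme out of tree, mirroring the recipe echoed by make
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    meson compile -C builddir   # invokes ninja inside builddir
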
00:01:43.632 The Meson build system 00:01:43.632 Version: 1.5.0 00:01:43.632 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:43.632 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:43.632 Build type: native build 00:01:43.632 Project name: xnvme 00:01:43.632 Project version: 0.7.3 00:01:43.632 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.632 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.632 Host machine cpu family: x86_64 00:01:43.632 Host machine cpu: x86_64 00:01:43.632 Message: host_machine.system: linux 00:01:43.632 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:43.632 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:43.632 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:43.632 Run-time dependency threads found: YES 00:01:43.632 Has header "setupapi.h" : NO 00:01:43.632 Has header "linux/blkzoned.h" : YES 00:01:43.632 Has header "linux/blkzoned.h" : YES (cached) 00:01:43.632 Has header "libaio.h" : YES 00:01:43.632 Library aio found: YES 00:01:43.632 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.632 Run-time dependency liburing found: YES 2.2 00:01:43.632 Dependency libvfn skipped: feature with-libvfn disabled 00:01:43.632 Run-time dependency appleframeworks found: NO (tried framework) 00:01:43.632 Run-time dependency appleframeworks found: NO (tried framework) 00:01:43.632 Configuring xnvme_config.h using configuration 00:01:43.632 Configuring xnvme.spec using configuration 00:01:43.632 Run-time dependency bash-completion found: YES 2.11 00:01:43.632 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:43.632 Program cp found: YES (/usr/bin/cp) 00:01:43.632 Has header "winsock2.h" : NO 00:01:43.632 Has header "dbghelp.h" : NO 00:01:43.632 Library rpcrt4 found: NO 00:01:43.632 Library rt found: YES 00:01:43.632 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:43.632 Found CMake: /usr/bin/cmake (3.27.7) 00:01:43.632 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:43.632 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:43.632 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:43.632 Build targets in project: 32 00:01:43.632 00:01:43.632 xnvme 0.7.3 00:01:43.632 00:01:43.632 User defined options 00:01:43.632 with-libaio : enabled 00:01:43.633 with-liburing: enabled 00:01:43.633 with-libvfn : disabled 00:01:43.633 with-spdk : false 00:01:43.633 00:01:43.633 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.233 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:44.233 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:44.233 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:44.233 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:44.233 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:44.233 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:44.233 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:44.233 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:44.233 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:44.233 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:44.233 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 
00:01:44.233 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:44.233 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:44.233 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:44.233 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:44.233 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:44.233 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:44.233 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:44.233 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:44.233 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:44.233 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:44.495 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:44.495 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:44.495 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:44.495 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:44.495 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:44.495 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:44.495 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:44.495 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:44.495 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:44.495 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:44.495 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:44.495 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:44.495 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:44.495 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:44.495 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:44.495 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:44.495 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:44.495 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:44.495 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:44.495 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:44.495 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:44.495 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:44.495 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:44.495 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:44.495 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:44.495 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:44.495 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:44.495 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:44.495 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:44.495 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:44.495 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:44.495 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:44.495 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:44.495 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:44.495 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:01:44.495 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:44.495 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:44.495 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:44.495 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:44.756 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:44.756 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:44.756 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:44.756 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:44.756 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:44.756 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:44.756 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:44.756 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:44.756 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:44.756 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:44.756 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:44.756 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:44.756 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:44.756 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:44.756 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:44.756 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:44.756 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:44.756 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:44.756 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:44.756 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:44.756 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:45.016 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:45.016 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:45.016 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:45.016 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:45.016 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:45.016 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:01:45.016 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:45.016 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:45.016 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:45.016 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:45.016 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:45.016 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:45.016 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:45.016 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:45.016 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:45.016 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:45.016 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:45.016 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:45.016 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:45.016 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:45.016 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:45.016 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:45.016 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:45.016 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:45.016 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:45.016 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:45.016 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:45.016 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:01:45.016 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:01:45.016 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:01:45.016 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:01:45.016 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:45.016 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:01:45.016 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:01:45.016 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:01:45.278 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:01:45.278 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:01:45.278 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:01:45.278 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:01:45.278 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:01:45.278 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:01:45.278 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:01:45.278 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:01:45.278 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:01:45.278 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:01:45.278 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:01:45.278 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:01:45.278 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:01:45.278 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:01:45.278 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:01:45.278 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:01:45.278 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:01:45.278 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:01:45.278 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:01:45.278 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:01:45.278 [136/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:01:45.278 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:01:45.278 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:01:45.278 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:01:45.278 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:01:45.539 [141/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:01:45.539 [142/203] Compiling C object 
tests/xnvme_tests_buf.p/buf.c.o 00:01:45.539 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:01:45.539 [144/203] Linking target lib/libxnvme.so 00:01:45.539 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:01:45.539 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:01:45.539 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:01:45.539 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:01:45.539 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:01:45.539 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:01:45.539 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:01:45.539 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:01:45.539 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:01:45.539 [154/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:01:45.539 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:01:45.539 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:01:45.539 [157/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:01:45.539 [158/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:01:45.800 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:01:45.800 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:01:45.800 [161/203] Compiling C object tools/kvs.p/kvs.c.o 00:01:45.800 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:01:45.800 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:01:45.800 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:01:45.800 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:01:45.800 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:01:45.800 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:01:45.800 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:01:45.800 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:01:45.800 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:01:45.800 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:01:46.061 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:01:46.061 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:01:46.061 [174/203] Linking static target lib/libxnvme.a 00:01:46.061 [175/203] Linking target tests/xnvme_tests_cli 00:01:46.061 [176/203] Linking target tests/xnvme_tests_scc 00:01:46.061 [177/203] Linking target tests/xnvme_tests_ioworker 00:01:46.061 [178/203] Linking target tests/xnvme_tests_buf 00:01:46.061 [179/203] Linking target tests/xnvme_tests_async_intf 00:01:46.061 [180/203] Linking target tests/xnvme_tests_lblk 00:01:46.061 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:01:46.061 [182/203] Linking target tests/xnvme_tests_enum 00:01:46.061 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:01:46.061 [184/203] Linking target tests/xnvme_tests_znd_append 00:01:46.061 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:01:46.061 [186/203] Linking target tests/xnvme_tests_znd_state 00:01:46.061 [187/203] Linking target tests/xnvme_tests_kvs 00:01:46.061 [188/203] Linking target tools/xdd 00:01:46.061 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:01:46.061 [190/203] Linking target examples/xnvme_dev 00:01:46.061 [191/203] Linking 
target tools/xnvme 00:01:46.061 [192/203] Linking target tools/lblk 00:01:46.061 [193/203] Linking target tools/kvs 00:01:46.061 [194/203] Linking target tests/xnvme_tests_map 00:01:46.061 [195/203] Linking target examples/xnvme_hello 00:01:46.061 [196/203] Linking target examples/xnvme_enum 00:01:46.061 [197/203] Linking target tools/xnvme_file 00:01:46.061 [198/203] Linking target examples/xnvme_io_async 00:01:46.061 [199/203] Linking target tools/zoned 00:01:46.061 [200/203] Linking target examples/xnvme_single_sync 00:01:46.061 [201/203] Linking target examples/xnvme_single_async 00:01:46.061 [202/203] Linking target examples/zoned_io_async 00:01:46.061 [203/203] Linking target examples/zoned_io_sync 00:01:46.323 INFO: autodetecting backend as ninja 00:01:46.323 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:46.323 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:01:50.536 The Meson build system 00:01:50.536 Version: 1.5.0 00:01:50.536 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:50.536 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:50.536 Build type: native build 00:01:50.536 Program cat found: YES (/usr/bin/cat) 00:01:50.536 Project name: DPDK 00:01:50.536 Project version: 23.11.0 00:01:50.536 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:50.536 C linker for the host machine: cc ld.bfd 2.40-14 00:01:50.536 Host machine cpu family: x86_64 00:01:50.536 Host machine cpu: x86_64 00:01:50.536 Message: ## Building in Developer Mode ## 00:01:50.536 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.536 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.536 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.536 Program python3 found: YES (/usr/bin/python3) 00:01:50.536 Program cat found: YES (/usr/bin/cat) 00:01:50.536 Compiler for C supports arguments -march=native: YES 00:01:50.536 Checking for size of "void *" : 8 00:01:50.536 Checking for size of "void *" : 8 (cached) 00:01:50.536 Library m found: YES 00:01:50.536 Library numa found: YES 00:01:50.536 Has header "numaif.h" : YES 00:01:50.536 Library fdt found: NO 00:01:50.536 Library execinfo found: NO 00:01:50.536 Has header "execinfo.h" : YES 00:01:50.536 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:50.536 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.536 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.536 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.537 Run-time dependency openssl found: YES 3.1.1 00:01:50.537 Run-time dependency libpcap found: YES 1.10.4 00:01:50.537 Has header "pcap.h" with dependency libpcap: YES 00:01:50.537 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.537 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.537 Compiler for C supports arguments -Wformat: YES 00:01:50.537 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.537 Compiler for C supports arguments -Wformat-security: NO 00:01:50.537 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.537 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.537 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.537 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.537 Compiler for C supports arguments -Wpointer-arith: 
YES 00:01:50.537 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.537 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.537 Compiler for C supports arguments -Wundef: YES 00:01:50.537 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.537 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.537 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:50.537 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.537 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.537 Program objdump found: YES (/usr/bin/objdump) 00:01:50.537 Compiler for C supports arguments -mavx512f: YES 00:01:50.537 Checking if "AVX512 checking" compiles: YES 00:01:50.537 Fetching value of define "__SSE4_2__" : 1 00:01:50.537 Fetching value of define "__AES__" : 1 00:01:50.537 Fetching value of define "__AVX__" : 1 00:01:50.537 Fetching value of define "__AVX2__" : 1 00:01:50.537 Fetching value of define "__AVX512BW__" : 1 00:01:50.537 Fetching value of define "__AVX512CD__" : 1 00:01:50.537 Fetching value of define "__AVX512DQ__" : 1 00:01:50.537 Fetching value of define "__AVX512F__" : 1 00:01:50.537 Fetching value of define "__AVX512VL__" : 1 00:01:50.537 Fetching value of define "__PCLMUL__" : 1 00:01:50.537 Fetching value of define "__RDRND__" : 1 00:01:50.537 Fetching value of define "__RDSEED__" : 1 00:01:50.537 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:50.537 Fetching value of define "__znver1__" : (undefined) 00:01:50.537 Fetching value of define "__znver2__" : (undefined) 00:01:50.537 Fetching value of define "__znver3__" : (undefined) 00:01:50.537 Fetching value of define "__znver4__" : (undefined) 00:01:50.537 Library asan found: YES 00:01:50.537 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.537 Message: lib/log: Defining dependency "log" 00:01:50.537 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.537 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.537 Library rt found: YES 00:01:50.537 Checking for function "getentropy" : NO 00:01:50.537 Message: lib/eal: Defining dependency "eal" 00:01:50.537 Message: lib/ring: Defining dependency "ring" 00:01:50.537 Message: lib/rcu: Defining dependency "rcu" 00:01:50.537 Message: lib/mempool: Defining dependency "mempool" 00:01:50.537 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.537 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.537 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.537 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.537 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.537 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.537 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:50.537 Compiler for C supports arguments -mpclmul: YES 00:01:50.537 Compiler for C supports arguments -maes: YES 00:01:50.537 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.537 Compiler for C supports arguments -mavx512bw: YES 00:01:50.537 Compiler for C supports arguments -mavx512dq: YES 00:01:50.537 Compiler for C supports arguments -mavx512vl: YES 00:01:50.537 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.537 Compiler for C supports arguments -mavx2: YES 00:01:50.537 Compiler for C supports arguments -mavx: YES 00:01:50.537 Message: lib/net: Defining dependency "net" 00:01:50.537 Message: lib/meter: Defining dependency "meter" 00:01:50.537 Message: lib/ethdev: Defining 
dependency "ethdev" 00:01:50.537 Message: lib/pci: Defining dependency "pci" 00:01:50.537 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.537 Message: lib/hash: Defining dependency "hash" 00:01:50.537 Message: lib/timer: Defining dependency "timer" 00:01:50.537 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.537 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.537 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.537 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.537 Message: lib/power: Defining dependency "power" 00:01:50.537 Message: lib/reorder: Defining dependency "reorder" 00:01:50.537 Message: lib/security: Defining dependency "security" 00:01:50.537 Has header "linux/userfaultfd.h" : YES 00:01:50.537 Has header "linux/vduse.h" : YES 00:01:50.537 Message: lib/vhost: Defining dependency "vhost" 00:01:50.537 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.537 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.537 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.537 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.537 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.537 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.537 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.537 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.537 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.537 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.537 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:50.537 Configuring doxy-api-html.conf using configuration 00:01:50.537 Configuring doxy-api-man.conf using configuration 00:01:50.537 Program mandb found: YES (/usr/bin/mandb) 00:01:50.537 Program sphinx-build found: NO 00:01:50.537 Configuring rte_build_config.h using configuration 00:01:50.537 Message: 00:01:50.537 ================= 00:01:50.537 Applications Enabled 00:01:50.537 ================= 00:01:50.537 00:01:50.537 apps: 00:01:50.537 00:01:50.537 00:01:50.537 Message: 00:01:50.537 ================= 00:01:50.537 Libraries Enabled 00:01:50.537 ================= 00:01:50.537 00:01:50.537 libs: 00:01:50.537 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.537 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.537 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.537 00:01:50.537 Message: 00:01:50.537 =============== 00:01:50.537 Drivers Enabled 00:01:50.537 =============== 00:01:50.537 00:01:50.537 common: 00:01:50.537 00:01:50.537 bus: 00:01:50.537 pci, vdev, 00:01:50.537 mempool: 00:01:50.537 ring, 00:01:50.537 dma: 00:01:50.537 00:01:50.537 net: 00:01:50.537 00:01:50.537 crypto: 00:01:50.537 00:01:50.537 compress: 00:01:50.537 00:01:50.537 vdpa: 00:01:50.537 00:01:50.537 00:01:50.537 Message: 00:01:50.537 ================= 00:01:50.537 Content Skipped 00:01:50.537 ================= 00:01:50.537 00:01:50.537 apps: 00:01:50.537 dumpcap: explicitly disabled via build config 00:01:50.537 graph: explicitly disabled via build config 00:01:50.537 pdump: explicitly disabled via build config 00:01:50.537 proc-info: explicitly disabled via build config 00:01:50.537 test-acl: explicitly disabled via build config 00:01:50.537 test-bbdev: explicitly disabled via build config 00:01:50.537 test-cmdline: explicitly 
disabled via build config 00:01:50.537 test-compress-perf: explicitly disabled via build config 00:01:50.537 test-crypto-perf: explicitly disabled via build config 00:01:50.537 test-dma-perf: explicitly disabled via build config 00:01:50.537 test-eventdev: explicitly disabled via build config 00:01:50.537 test-fib: explicitly disabled via build config 00:01:50.537 test-flow-perf: explicitly disabled via build config 00:01:50.537 test-gpudev: explicitly disabled via build config 00:01:50.537 test-mldev: explicitly disabled via build config 00:01:50.537 test-pipeline: explicitly disabled via build config 00:01:50.537 test-pmd: explicitly disabled via build config 00:01:50.537 test-regex: explicitly disabled via build config 00:01:50.537 test-sad: explicitly disabled via build config 00:01:50.537 test-security-perf: explicitly disabled via build config 00:01:50.537 00:01:50.537 libs: 00:01:50.537 metrics: explicitly disabled via build config 00:01:50.537 acl: explicitly disabled via build config 00:01:50.537 bbdev: explicitly disabled via build config 00:01:50.537 bitratestats: explicitly disabled via build config 00:01:50.537 bpf: explicitly disabled via build config 00:01:50.537 cfgfile: explicitly disabled via build config 00:01:50.537 distributor: explicitly disabled via build config 00:01:50.537 efd: explicitly disabled via build config 00:01:50.537 eventdev: explicitly disabled via build config 00:01:50.537 dispatcher: explicitly disabled via build config 00:01:50.537 gpudev: explicitly disabled via build config 00:01:50.537 gro: explicitly disabled via build config 00:01:50.537 gso: explicitly disabled via build config 00:01:50.537 ip_frag: explicitly disabled via build config 00:01:50.537 jobstats: explicitly disabled via build config 00:01:50.537 latencystats: explicitly disabled via build config 00:01:50.537 lpm: explicitly disabled via build config 00:01:50.537 member: explicitly disabled via build config 00:01:50.537 pcapng: explicitly disabled via build config 00:01:50.537 rawdev: explicitly disabled via build config 00:01:50.537 regexdev: explicitly disabled via build config 00:01:50.537 mldev: explicitly disabled via build config 00:01:50.537 rib: explicitly disabled via build config 00:01:50.537 sched: explicitly disabled via build config 00:01:50.537 stack: explicitly disabled via build config 00:01:50.537 ipsec: explicitly disabled via build config 00:01:50.537 pdcp: explicitly disabled via build config 00:01:50.538 fib: explicitly disabled via build config 00:01:50.538 port: explicitly disabled via build config 00:01:50.538 pdump: explicitly disabled via build config 00:01:50.538 table: explicitly disabled via build config 00:01:50.538 pipeline: explicitly disabled via build config 00:01:50.538 graph: explicitly disabled via build config 00:01:50.538 node: explicitly disabled via build config 00:01:50.538 00:01:50.538 drivers: 00:01:50.538 common/cpt: not in enabled drivers build config 00:01:50.538 common/dpaax: not in enabled drivers build config 00:01:50.538 common/iavf: not in enabled drivers build config 00:01:50.538 common/idpf: not in enabled drivers build config 00:01:50.538 common/mvep: not in enabled drivers build config 00:01:50.538 common/octeontx: not in enabled drivers build config 00:01:50.538 bus/auxiliary: not in enabled drivers build config 00:01:50.538 bus/cdx: not in enabled drivers build config 00:01:50.538 bus/dpaa: not in enabled drivers build config 00:01:50.538 bus/fslmc: not in enabled drivers build config 00:01:50.538 bus/ifpga: not in enabled 
drivers build config 00:01:50.538 bus/platform: not in enabled drivers build config 00:01:50.538 bus/vmbus: not in enabled drivers build config 00:01:50.538 common/cnxk: not in enabled drivers build config 00:01:50.538 common/mlx5: not in enabled drivers build config 00:01:50.538 common/nfp: not in enabled drivers build config 00:01:50.538 common/qat: not in enabled drivers build config 00:01:50.538 common/sfc_efx: not in enabled drivers build config 00:01:50.538 mempool/bucket: not in enabled drivers build config 00:01:50.538 mempool/cnxk: not in enabled drivers build config 00:01:50.538 mempool/dpaa: not in enabled drivers build config 00:01:50.538 mempool/dpaa2: not in enabled drivers build config 00:01:50.538 mempool/octeontx: not in enabled drivers build config 00:01:50.538 mempool/stack: not in enabled drivers build config 00:01:50.538 dma/cnxk: not in enabled drivers build config 00:01:50.538 dma/dpaa: not in enabled drivers build config 00:01:50.538 dma/dpaa2: not in enabled drivers build config 00:01:50.538 dma/hisilicon: not in enabled drivers build config 00:01:50.538 dma/idxd: not in enabled drivers build config 00:01:50.538 dma/ioat: not in enabled drivers build config 00:01:50.538 dma/skeleton: not in enabled drivers build config 00:01:50.538 net/af_packet: not in enabled drivers build config 00:01:50.538 net/af_xdp: not in enabled drivers build config 00:01:50.538 net/ark: not in enabled drivers build config 00:01:50.538 net/atlantic: not in enabled drivers build config 00:01:50.538 net/avp: not in enabled drivers build config 00:01:50.538 net/axgbe: not in enabled drivers build config 00:01:50.538 net/bnx2x: not in enabled drivers build config 00:01:50.538 net/bnxt: not in enabled drivers build config 00:01:50.538 net/bonding: not in enabled drivers build config 00:01:50.538 net/cnxk: not in enabled drivers build config 00:01:50.538 net/cpfl: not in enabled drivers build config 00:01:50.538 net/cxgbe: not in enabled drivers build config 00:01:50.538 net/dpaa: not in enabled drivers build config 00:01:50.538 net/dpaa2: not in enabled drivers build config 00:01:50.538 net/e1000: not in enabled drivers build config 00:01:50.538 net/ena: not in enabled drivers build config 00:01:50.538 net/enetc: not in enabled drivers build config 00:01:50.538 net/enetfec: not in enabled drivers build config 00:01:50.538 net/enic: not in enabled drivers build config 00:01:50.538 net/failsafe: not in enabled drivers build config 00:01:50.538 net/fm10k: not in enabled drivers build config 00:01:50.538 net/gve: not in enabled drivers build config 00:01:50.538 net/hinic: not in enabled drivers build config 00:01:50.538 net/hns3: not in enabled drivers build config 00:01:50.538 net/i40e: not in enabled drivers build config 00:01:50.538 net/iavf: not in enabled drivers build config 00:01:50.538 net/ice: not in enabled drivers build config 00:01:50.538 net/idpf: not in enabled drivers build config 00:01:50.538 net/igc: not in enabled drivers build config 00:01:50.538 net/ionic: not in enabled drivers build config 00:01:50.538 net/ipn3ke: not in enabled drivers build config 00:01:50.538 net/ixgbe: not in enabled drivers build config 00:01:50.538 net/mana: not in enabled drivers build config 00:01:50.538 net/memif: not in enabled drivers build config 00:01:50.538 net/mlx4: not in enabled drivers build config 00:01:50.538 net/mlx5: not in enabled drivers build config 00:01:50.538 net/mvneta: not in enabled drivers build config 00:01:50.538 net/mvpp2: not in enabled drivers build config 00:01:50.538 
net/netvsc: not in enabled drivers build config 00:01:50.538 net/nfb: not in enabled drivers build config 00:01:50.538 net/nfp: not in enabled drivers build config 00:01:50.538 net/ngbe: not in enabled drivers build config 00:01:50.538 net/null: not in enabled drivers build config 00:01:50.538 net/octeontx: not in enabled drivers build config 00:01:50.538 net/octeon_ep: not in enabled drivers build config 00:01:50.538 net/pcap: not in enabled drivers build config 00:01:50.538 net/pfe: not in enabled drivers build config 00:01:50.538 net/qede: not in enabled drivers build config 00:01:50.538 net/ring: not in enabled drivers build config 00:01:50.538 net/sfc: not in enabled drivers build config 00:01:50.538 net/softnic: not in enabled drivers build config 00:01:50.538 net/tap: not in enabled drivers build config 00:01:50.538 net/thunderx: not in enabled drivers build config 00:01:50.538 net/txgbe: not in enabled drivers build config 00:01:50.538 net/vdev_netvsc: not in enabled drivers build config 00:01:50.538 net/vhost: not in enabled drivers build config 00:01:50.538 net/virtio: not in enabled drivers build config 00:01:50.538 net/vmxnet3: not in enabled drivers build config 00:01:50.538 raw/*: missing internal dependency, "rawdev" 00:01:50.538 crypto/armv8: not in enabled drivers build config 00:01:50.538 crypto/bcmfs: not in enabled drivers build config 00:01:50.538 crypto/caam_jr: not in enabled drivers build config 00:01:50.538 crypto/ccp: not in enabled drivers build config 00:01:50.538 crypto/cnxk: not in enabled drivers build config 00:01:50.538 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.538 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.538 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.538 crypto/mlx5: not in enabled drivers build config 00:01:50.538 crypto/mvsam: not in enabled drivers build config 00:01:50.538 crypto/nitrox: not in enabled drivers build config 00:01:50.538 crypto/null: not in enabled drivers build config 00:01:50.538 crypto/octeontx: not in enabled drivers build config 00:01:50.538 crypto/openssl: not in enabled drivers build config 00:01:50.538 crypto/scheduler: not in enabled drivers build config 00:01:50.538 crypto/uadk: not in enabled drivers build config 00:01:50.538 crypto/virtio: not in enabled drivers build config 00:01:50.538 compress/isal: not in enabled drivers build config 00:01:50.538 compress/mlx5: not in enabled drivers build config 00:01:50.538 compress/octeontx: not in enabled drivers build config 00:01:50.538 compress/zlib: not in enabled drivers build config 00:01:50.538 regex/*: missing internal dependency, "regexdev" 00:01:50.538 ml/*: missing internal dependency, "mldev" 00:01:50.538 vdpa/ifc: not in enabled drivers build config 00:01:50.538 vdpa/mlx5: not in enabled drivers build config 00:01:50.538 vdpa/nfp: not in enabled drivers build config 00:01:50.538 vdpa/sfc: not in enabled drivers build config 00:01:50.538 event/*: missing internal dependency, "eventdev" 00:01:50.538 baseband/*: missing internal dependency, "bbdev" 00:01:50.538 gpu/*: missing internal dependency, "gpudev" 00:01:50.538 00:01:50.538 00:01:51.112 Build targets in project: 84 00:01:51.112 00:01:51.112 DPDK 23.11.0 00:01:51.112 00:01:51.112 User defined options 00:01:51.112 buildtype : debug 00:01:51.112 default_library : shared 00:01:51.112 libdir : lib 00:01:51.112 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:51.112 b_sanitize : address 00:01:51.112 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 
-Wno-stringop-overread -Wno-array-bounds 00:01:51.112 c_link_args : 00:01:51.112 cpu_instruction_set: native 00:01:51.112 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:51.112 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:51.112 enable_docs : false 00:01:51.112 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.112 enable_kmods : false 00:01:51.112 tests : false 00:01:51.112 00:01:51.112 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.373 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:51.373 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.373 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.373 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.373 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.373 [5/264] Linking static target lib/librte_kvargs.a 00:01:51.373 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.373 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.373 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.373 [9/264] Linking static target lib/librte_log.a 00:01:51.633 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.633 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.633 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.891 [13/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.891 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.891 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.891 [16/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.891 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.891 [18/264] Linking static target lib/librte_telemetry.a 00:01:52.151 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.151 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.151 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.151 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.151 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.151 [24/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.413 [25/264] Linking target lib/librte_log.so.24.0 00:01:52.413 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.413 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.413 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.413 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.413 [30/264] 
Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:52.413 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.674 [32/264] Linking target lib/librte_kvargs.so.24.0 00:01:52.674 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.674 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.674 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.674 [36/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:52.674 [37/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.674 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.674 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.674 [40/264] Linking target lib/librte_telemetry.so.24.0 00:01:52.935 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.935 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.935 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.935 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.935 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.935 [46/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:52.935 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.935 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.196 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.196 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.196 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.196 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.196 [53/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.196 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.196 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.196 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.196 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.458 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.458 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:53.458 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.458 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.458 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.458 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.458 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.719 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.719 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.719 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.719 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.719 [69/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.719 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.719 [71/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.719 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.981 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.981 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.981 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.981 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.981 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.981 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.981 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:54.243 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:54.243 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.243 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.243 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:54.243 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.243 [85/264] Linking static target lib/librte_ring.a 00:01:54.243 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.504 [87/264] Linking static target lib/librte_eal.a 00:01:54.504 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.504 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:54.504 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.504 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.504 [92/264] Linking static target lib/librte_mempool.a 00:01:54.805 [93/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.805 [94/264] Linking static target lib/librte_rcu.a 00:01:54.805 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.805 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.805 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:54.805 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.072 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.072 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.072 [101/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.072 [102/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.072 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.072 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.331 [105/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.331 [106/264] Linking static target lib/librte_mbuf.a 00:01:55.331 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.331 [108/264] Linking static target lib/librte_meter.a 00:01:55.593 [109/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:55.593 [110/264] Linking static target lib/librte_net.a 00:01:55.593 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.593 [112/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.593 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.593 [114/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.854 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.854 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.854 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.854 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.115 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.376 [120/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.376 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:56.376 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.376 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.376 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.638 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.638 [126/264] Linking static target lib/librte_pci.a 00:01:56.638 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.638 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.638 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.638 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.638 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.638 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.638 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.638 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.638 [135/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.638 [136/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.638 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.898 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.898 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.898 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.898 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.898 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.898 [143/264] Linking static target lib/librte_cmdline.a 00:01:56.898 [144/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.158 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:57.158 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.158 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.158 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.158 [149/264] Linking static target lib/librte_timer.a 00:01:57.158 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.158 [151/264] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.418 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.678 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.678 [154/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.678 [155/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.678 [156/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.678 [157/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.678 [158/264] Linking static target lib/librte_dmadev.a 00:01:57.678 [159/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.678 [160/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.678 [161/264] Linking static target lib/librte_compressdev.a 00:01:57.678 [162/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.938 [163/264] Linking static target lib/librte_hash.a 00:01:57.938 [164/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.938 [165/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.938 [166/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.938 [167/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.938 [168/264] Linking static target lib/librte_ethdev.a 00:01:58.252 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.252 [170/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.252 [171/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.253 [172/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.253 [173/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.512 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.512 [175/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.512 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.512 [177/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.512 [178/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.512 [179/264] Linking static target lib/librte_power.a 00:01:58.512 [180/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.773 [181/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.773 [182/264] Linking static target lib/librte_cryptodev.a 00:01:58.773 [183/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.773 [184/264] Linking static target lib/librte_reorder.a 00:01:58.773 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.773 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.033 [187/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.033 [188/264] Linking static target lib/librte_security.a 00:01:59.033 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.033 [190/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.599 [191/264] Generating lib/power.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:59.599 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.599 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.599 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.599 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.599 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.861 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.861 [198/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.861 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.861 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.861 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.121 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.121 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.121 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.122 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.122 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.381 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.381 [208/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:00.381 [209/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:00.381 [210/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.381 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.381 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:00.381 [213/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.381 [214/264] Linking static target drivers/librte_bus_vdev.a 00:02:00.381 [215/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.381 [216/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.381 [217/264] Linking static target drivers/librte_bus_pci.a 00:02:00.381 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:00.381 [219/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.381 [220/264] Linking static target drivers/librte_mempool_ring.a 00:02:00.381 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.642 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.642 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.903 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:02.290 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.290 [226/264] Linking target lib/librte_eal.so.24.0 00:02:02.290 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:02.290 [228/264] Linking target lib/librte_ring.so.24.0 
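
[editor's note] At this point all 264 DPDK targets are built: EAL, ring, rcu, mempool, mbuf, net, ethdev, vhost and the pci/vdev bus and ring mempool drivers, compiled per the user-defined options above as shared debug libraries with ASan (b_sanitize : address). As a minimal, hypothetical sketch of what the ring code compiled at [84/264] and the EAL linked at [87/264] provide (not part of this run; assumes an installed DPDK and a link line such as cc ring_sketch.c $(pkg-config --cflags --libs libdpdk)):

    /* ring_sketch.c -- illustrative only, not part of this test run. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>

    int main(int argc, char **argv)
    {
            /* rte_eal_init() consumes EAL arguments; "--no-huge" keeps a quick
             * test lightweight. */
            if (rte_eal_init(argc, argv) < 0)
                    return 1;

            /* Single-producer/single-consumer ring; the count must be a
             * power of two. */
            struct rte_ring *r = rte_ring_create("demo_ring", 1024,
                                                 rte_socket_id(),
                                                 RING_F_SP_ENQ | RING_F_SC_DEQ);
            if (r == NULL)
                    return 1;

            int payload = 42;
            void *out = NULL;
            rte_ring_enqueue(r, &payload);
            if (rte_ring_dequeue(r, &out) == 0)
                    printf("dequeued %d\n", *(int *)out);

            rte_ring_free(r);
            rte_eal_cleanup();
            return 0;
    }
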
00:02:02.290 [229/264] Linking target lib/librte_meter.so.24.0 00:02:02.290 [230/264] Linking target lib/librte_pci.so.24.0 00:02:02.290 [231/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:02.290 [232/264] Linking target lib/librte_timer.so.24.0 00:02:02.290 [233/264] Linking target lib/librte_dmadev.so.24.0 00:02:02.290 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:02.290 [235/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:02.290 [236/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:02.290 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:02.290 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:02.290 [239/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:02.551 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:02.551 [241/264] Linking target lib/librte_mempool.so.24.0 00:02:02.551 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:02.551 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:02.551 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:02.551 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:02.551 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:02.811 [247/264] Linking target lib/librte_reorder.so.24.0 00:02:02.811 [248/264] Linking target lib/librte_compressdev.so.24.0 00:02:02.811 [249/264] Linking target lib/librte_cryptodev.so.24.0 00:02:02.811 [250/264] Linking target lib/librte_net.so.24.0 00:02:02.811 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:02.811 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:02.811 [253/264] Linking target lib/librte_security.so.24.0 00:02:02.811 [254/264] Linking target lib/librte_hash.so.24.0 00:02:02.811 [255/264] Linking target lib/librte_cmdline.so.24.0 00:02:03.071 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:03.331 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.593 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:03.593 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:03.593 [260/264] Linking target lib/librte_power.so.24.0 00:02:04.976 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.976 [262/264] Linking static target lib/librte_vhost.a 00:02:05.918 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.918 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:05.918 INFO: autodetecting backend as ninja 00:02:05.918 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:06.862 CC lib/ut_mock/mock.o 00:02:06.862 CC lib/log/log.o 00:02:06.862 CC lib/log/log_flags.o 00:02:06.862 CC lib/ut/ut.o 00:02:06.862 CC lib/log/log_deprecated.o 00:02:06.862 LIB libspdk_ut_mock.a 00:02:06.862 SO libspdk_ut_mock.so.5.0 00:02:06.862 LIB libspdk_log.a 00:02:06.862 LIB libspdk_ut.a 00:02:07.124 SO libspdk_log.so.6.1 00:02:07.124 SO libspdk_ut.so.1.0 00:02:07.124 SYMLINK libspdk_ut_mock.so 00:02:07.124 SYMLINK libspdk_ut.so 00:02:07.124 SYMLINK libspdk_log.so 
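
[editor's note] The DPDK sub-build is finished and SPDK's own makefiles take over, starting with lib/ut_mock and lib/log; the SO/SYMLINK steps create the versioned shared objects and their unversioned symlinks. A hypothetical sketch of the lib/log API whose objects (log.o, log_flags.o, log_deprecated.o) were just compiled; macro names are from spdk/log.h, and linking against the freshly built libspdk_log is assumed:

    /* log_sketch.c -- illustrative only; assumes linking against libspdk_log. */
    #include "spdk/log.h"

    int main(void)
    {
            /* Emit everything up to and including DEBUG on stderr. */
            spdk_log_set_print_level(SPDK_LOG_DEBUG);

            SPDK_NOTICELOG("notice from the sketch\n");
            SPDK_WARNLOG("warning from the sketch\n");
            SPDK_ERRLOG("error from the sketch\n");
            return 0;
    }
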
00:02:07.124 CC lib/dma/dma.o 00:02:07.124 CC lib/ioat/ioat.o 00:02:07.124 CC lib/util/base64.o 00:02:07.124 CC lib/util/cpuset.o 00:02:07.124 CC lib/util/bit_array.o 00:02:07.124 CC lib/util/crc32.o 00:02:07.124 CC lib/util/crc16.o 00:02:07.124 CC lib/util/crc32c.o 00:02:07.124 CXX lib/trace_parser/trace.o 00:02:07.124 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.385 CC lib/util/crc32_ieee.o 00:02:07.385 CC lib/util/crc64.o 00:02:07.385 CC lib/util/dif.o 00:02:07.385 CC lib/util/fd.o 00:02:07.385 LIB libspdk_dma.a 00:02:07.385 CC lib/util/file.o 00:02:07.385 SO libspdk_dma.so.3.0 00:02:07.385 LIB libspdk_ioat.a 00:02:07.385 CC lib/vfio_user/host/vfio_user.o 00:02:07.385 CC lib/util/hexlify.o 00:02:07.385 SYMLINK libspdk_dma.so 00:02:07.385 CC lib/util/iov.o 00:02:07.385 CC lib/util/math.o 00:02:07.385 SO libspdk_ioat.so.6.0 00:02:07.385 CC lib/util/pipe.o 00:02:07.385 CC lib/util/strerror_tls.o 00:02:07.385 SYMLINK libspdk_ioat.so 00:02:07.385 CC lib/util/string.o 00:02:07.385 CC lib/util/uuid.o 00:02:07.385 CC lib/util/fd_group.o 00:02:07.385 CC lib/util/xor.o 00:02:07.385 CC lib/util/zipf.o 00:02:07.646 LIB libspdk_vfio_user.a 00:02:07.646 SO libspdk_vfio_user.so.4.0 00:02:07.646 SYMLINK libspdk_vfio_user.so 00:02:07.905 LIB libspdk_util.a 00:02:07.905 SO libspdk_util.so.8.0 00:02:07.905 LIB libspdk_trace_parser.a 00:02:07.905 SYMLINK libspdk_util.so 00:02:07.905 SO libspdk_trace_parser.so.4.0 00:02:08.165 SYMLINK libspdk_trace_parser.so 00:02:08.165 CC lib/env_dpdk/env.o 00:02:08.165 CC lib/env_dpdk/memory.o 00:02:08.165 CC lib/env_dpdk/pci.o 00:02:08.165 CC lib/env_dpdk/threads.o 00:02:08.165 CC lib/env_dpdk/init.o 00:02:08.165 CC lib/rdma/common.o 00:02:08.165 CC lib/idxd/idxd.o 00:02:08.165 CC lib/json/json_parse.o 00:02:08.165 CC lib/vmd/vmd.o 00:02:08.165 CC lib/conf/conf.o 00:02:08.165 CC lib/vmd/led.o 00:02:08.425 CC lib/rdma/rdma_verbs.o 00:02:08.425 CC lib/json/json_util.o 00:02:08.425 LIB libspdk_conf.a 00:02:08.425 CC lib/json/json_write.o 00:02:08.425 SO libspdk_conf.so.5.0 00:02:08.425 SYMLINK libspdk_conf.so 00:02:08.425 CC lib/idxd/idxd_user.o 00:02:08.425 CC lib/env_dpdk/pci_ioat.o 00:02:08.425 LIB libspdk_rdma.a 00:02:08.425 CC lib/env_dpdk/pci_virtio.o 00:02:08.425 SO libspdk_rdma.so.5.0 00:02:08.425 CC lib/env_dpdk/pci_vmd.o 00:02:08.425 SYMLINK libspdk_rdma.so 00:02:08.425 CC lib/idxd/idxd_kernel.o 00:02:08.425 CC lib/env_dpdk/pci_idxd.o 00:02:08.685 CC lib/env_dpdk/pci_event.o 00:02:08.685 LIB libspdk_json.a 00:02:08.685 CC lib/env_dpdk/sigbus_handler.o 00:02:08.685 SO libspdk_json.so.5.1 00:02:08.685 CC lib/env_dpdk/pci_dpdk.o 00:02:08.685 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.685 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.685 LIB libspdk_idxd.a 00:02:08.685 SYMLINK libspdk_json.so 00:02:08.685 SO libspdk_idxd.so.11.0 00:02:08.685 SYMLINK libspdk_idxd.so 00:02:08.685 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.685 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.685 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.685 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.685 LIB libspdk_vmd.a 00:02:08.685 SO libspdk_vmd.so.5.0 00:02:08.945 SYMLINK libspdk_vmd.so 00:02:08.945 LIB libspdk_jsonrpc.a 00:02:08.945 SO libspdk_jsonrpc.so.5.1 00:02:09.206 SYMLINK libspdk_jsonrpc.so 00:02:09.206 CC lib/rpc/rpc.o 00:02:09.466 LIB libspdk_rpc.a 00:02:09.466 SO libspdk_rpc.so.5.0 00:02:09.466 LIB libspdk_env_dpdk.a 00:02:09.466 SYMLINK libspdk_rpc.so 00:02:09.466 SO libspdk_env_dpdk.so.13.0 00:02:09.728 CC lib/trace/trace.o 00:02:09.728 CC lib/sock/sock_rpc.o 00:02:09.728 CC lib/sock/sock.o 
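
[editor's note] This chunk compiles lib/env_dpdk (the shim that maps SPDK's env API onto the DPDK built above) along with lib/util, lib/json, lib/jsonrpc and lib/rpc. A minimal, hypothetical sketch of that env API (names from spdk/env.h; the DMA-safe hugepage allocations shown here are what the nvme and bdev layers compiled later depend on):

    /* env_sketch.c -- illustrative only; assumes the SPDK env_dpdk library. */
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);     /* fill in defaults */
            opts.name = "env_sketch";      /* becomes the DPDK process name */
            if (spdk_env_init(&opts) < 0)
                    return 1;

            /* 4 KiB zeroed, 4 KiB aligned, DMA-safe buffer from pinned memory. */
            void *buf = spdk_dma_zmalloc(4096, 4096, NULL);
            if (buf == NULL)
                    return 1;
            printf("buffer at %p\n", buf);

            spdk_dma_free(buf);
            return 0;
    }
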
00:02:09.728 CC lib/trace/trace_rpc.o 00:02:09.728 CC lib/trace/trace_flags.o 00:02:09.728 CC lib/notify/notify.o 00:02:09.728 CC lib/notify/notify_rpc.o 00:02:09.728 SYMLINK libspdk_env_dpdk.so 00:02:09.728 LIB libspdk_notify.a 00:02:09.728 SO libspdk_notify.so.5.0 00:02:09.728 LIB libspdk_trace.a 00:02:09.990 SYMLINK libspdk_notify.so 00:02:09.990 SO libspdk_trace.so.9.0 00:02:09.990 SYMLINK libspdk_trace.so 00:02:09.990 LIB libspdk_sock.a 00:02:09.990 SO libspdk_sock.so.8.0 00:02:09.990 SYMLINK libspdk_sock.so 00:02:09.990 CC lib/thread/iobuf.o 00:02:09.990 CC lib/thread/thread.o 00:02:10.250 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.250 CC lib/nvme/nvme_fabric.o 00:02:10.250 CC lib/nvme/nvme_ctrlr.o 00:02:10.250 CC lib/nvme/nvme_pcie.o 00:02:10.250 CC lib/nvme/nvme_ns_cmd.o 00:02:10.250 CC lib/nvme/nvme_ns.o 00:02:10.250 CC lib/nvme/nvme_pcie_common.o 00:02:10.250 CC lib/nvme/nvme_qpair.o 00:02:10.250 CC lib/nvme/nvme.o 00:02:10.820 CC lib/nvme/nvme_quirks.o 00:02:10.820 CC lib/nvme/nvme_transport.o 00:02:10.820 CC lib/nvme/nvme_discovery.o 00:02:10.820 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.820 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.079 CC lib/nvme/nvme_tcp.o 00:02:11.079 CC lib/nvme/nvme_opal.o 00:02:11.079 CC lib/nvme/nvme_io_msg.o 00:02:11.079 CC lib/nvme/nvme_poll_group.o 00:02:11.337 CC lib/nvme/nvme_zns.o 00:02:11.337 CC lib/nvme/nvme_cuse.o 00:02:11.337 CC lib/nvme/nvme_vfio_user.o 00:02:11.337 CC lib/nvme/nvme_rdma.o 00:02:11.594 LIB libspdk_thread.a 00:02:11.594 SO libspdk_thread.so.9.0 00:02:11.853 SYMLINK libspdk_thread.so 00:02:11.853 CC lib/init/json_config.o 00:02:11.853 CC lib/init/subsystem.o 00:02:11.853 CC lib/blob/blobstore.o 00:02:11.853 CC lib/accel/accel.o 00:02:11.853 CC lib/virtio/virtio.o 00:02:11.853 CC lib/init/subsystem_rpc.o 00:02:11.853 CC lib/init/rpc.o 00:02:11.853 CC lib/accel/accel_rpc.o 00:02:12.111 CC lib/accel/accel_sw.o 00:02:12.111 LIB libspdk_init.a 00:02:12.111 SO libspdk_init.so.4.0 00:02:12.111 SYMLINK libspdk_init.so 00:02:12.111 CC lib/blob/request.o 00:02:12.111 CC lib/virtio/virtio_vhost_user.o 00:02:12.111 CC lib/blob/zeroes.o 00:02:12.111 CC lib/blob/blob_bs_dev.o 00:02:12.369 CC lib/virtio/virtio_vfio_user.o 00:02:12.369 CC lib/virtio/virtio_pci.o 00:02:12.369 CC lib/event/app.o 00:02:12.369 CC lib/event/reactor.o 00:02:12.369 CC lib/event/log_rpc.o 00:02:12.369 CC lib/event/app_rpc.o 00:02:12.369 CC lib/event/scheduler_static.o 00:02:12.369 LIB libspdk_nvme.a 00:02:12.626 SO libspdk_nvme.so.12.0 00:02:12.626 LIB libspdk_virtio.a 00:02:12.626 SO libspdk_virtio.so.6.0 00:02:12.626 LIB libspdk_accel.a 00:02:12.626 SO libspdk_accel.so.14.0 00:02:12.626 SYMLINK libspdk_virtio.so 00:02:12.884 SYMLINK libspdk_accel.so 00:02:12.884 SYMLINK libspdk_nvme.so 00:02:12.884 LIB libspdk_event.a 00:02:12.884 SO libspdk_event.so.12.0 00:02:12.884 CC lib/bdev/bdev.o 00:02:12.884 CC lib/bdev/bdev_rpc.o 00:02:12.884 CC lib/bdev/bdev_zone.o 00:02:12.884 CC lib/bdev/scsi_nvme.o 00:02:12.884 CC lib/bdev/part.o 00:02:12.884 SYMLINK libspdk_event.so 00:02:14.870 LIB libspdk_blob.a 00:02:15.127 SO libspdk_blob.so.10.1 00:02:15.127 LIB libspdk_bdev.a 00:02:15.127 SYMLINK libspdk_blob.so 00:02:15.127 SO libspdk_bdev.so.14.0 00:02:15.127 SYMLINK libspdk_bdev.so 00:02:15.127 CC lib/lvol/lvol.o 00:02:15.127 CC lib/blobfs/blobfs.o 00:02:15.127 CC lib/blobfs/tree.o 00:02:15.384 CC lib/nvmf/ctrlr.o 00:02:15.384 CC lib/nvmf/ctrlr_bdev.o 00:02:15.384 CC lib/nvmf/ctrlr_discovery.o 00:02:15.384 CC lib/ublk/ublk.o 00:02:15.384 CC lib/nbd/nbd.o 00:02:15.384 CC 
lib/ftl/ftl_core.o 00:02:15.384 CC lib/scsi/dev.o 00:02:15.384 CC lib/scsi/lun.o 00:02:15.384 CC lib/ublk/ublk_rpc.o 00:02:15.641 CC lib/nbd/nbd_rpc.o 00:02:15.641 CC lib/scsi/port.o 00:02:15.641 CC lib/ftl/ftl_init.o 00:02:15.641 CC lib/ftl/ftl_layout.o 00:02:15.641 CC lib/ftl/ftl_debug.o 00:02:15.641 CC lib/scsi/scsi.o 00:02:15.641 LIB libspdk_nbd.a 00:02:15.899 SO libspdk_nbd.so.6.0 00:02:15.899 CC lib/scsi/scsi_bdev.o 00:02:15.899 LIB libspdk_ublk.a 00:02:15.899 SO libspdk_ublk.so.2.0 00:02:15.899 CC lib/scsi/scsi_pr.o 00:02:15.899 CC lib/scsi/scsi_rpc.o 00:02:15.899 SYMLINK libspdk_nbd.so 00:02:15.899 CC lib/scsi/task.o 00:02:15.899 SYMLINK libspdk_ublk.so 00:02:15.899 CC lib/nvmf/subsystem.o 00:02:15.899 CC lib/nvmf/nvmf.o 00:02:15.899 LIB libspdk_blobfs.a 00:02:15.899 SO libspdk_blobfs.so.9.0 00:02:15.899 CC lib/ftl/ftl_io.o 00:02:15.899 SYMLINK libspdk_blobfs.so 00:02:15.899 CC lib/nvmf/nvmf_rpc.o 00:02:15.899 CC lib/nvmf/transport.o 00:02:15.899 CC lib/ftl/ftl_sb.o 00:02:16.157 CC lib/nvmf/tcp.o 00:02:16.157 CC lib/ftl/ftl_l2p.o 00:02:16.157 LIB libspdk_lvol.a 00:02:16.157 SO libspdk_lvol.so.9.1 00:02:16.157 LIB libspdk_scsi.a 00:02:16.157 CC lib/nvmf/rdma.o 00:02:16.157 SYMLINK libspdk_lvol.so 00:02:16.157 CC lib/ftl/ftl_l2p_flat.o 00:02:16.157 SO libspdk_scsi.so.8.0 00:02:16.157 CC lib/ftl/ftl_nv_cache.o 00:02:16.415 SYMLINK libspdk_scsi.so 00:02:16.415 CC lib/ftl/ftl_band.o 00:02:16.415 CC lib/ftl/ftl_band_ops.o 00:02:16.415 CC lib/ftl/ftl_writer.o 00:02:16.415 CC lib/ftl/ftl_rq.o 00:02:16.673 CC lib/ftl/ftl_reloc.o 00:02:16.673 CC lib/ftl/ftl_l2p_cache.o 00:02:16.673 CC lib/ftl/ftl_p2l.o 00:02:16.673 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.673 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.931 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:17.189 CC lib/iscsi/conn.o 00:02:17.189 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:17.189 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:17.189 CC lib/vhost/vhost.o 00:02:17.189 CC lib/vhost/vhost_rpc.o 00:02:17.189 CC lib/vhost/vhost_scsi.o 00:02:17.189 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:17.189 CC lib/ftl/utils/ftl_conf.o 00:02:17.189 CC lib/iscsi/init_grp.o 00:02:17.447 CC lib/ftl/utils/ftl_md.o 00:02:17.447 CC lib/ftl/utils/ftl_mempool.o 00:02:17.447 CC lib/ftl/utils/ftl_bitmap.o 00:02:17.447 CC lib/ftl/utils/ftl_property.o 00:02:17.447 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:17.447 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:17.447 CC lib/iscsi/iscsi.o 00:02:17.447 CC lib/iscsi/md5.o 00:02:17.706 CC lib/iscsi/param.o 00:02:17.706 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:17.706 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:17.706 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:17.706 CC lib/iscsi/portal_grp.o 00:02:17.706 CC lib/iscsi/tgt_node.o 00:02:17.706 CC lib/vhost/vhost_blk.o 00:02:17.706 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:17.706 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:17.706 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:17.965 CC lib/vhost/rte_vhost_user.o 00:02:17.965 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:17.965 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:17.965 CC lib/iscsi/iscsi_subsystem.o 00:02:17.965 CC lib/iscsi/iscsi_rpc.o 00:02:17.965 CC lib/ftl/base/ftl_base_dev.o 00:02:17.965 CC lib/ftl/base/ftl_base_bdev.o 00:02:17.965 CC 
lib/iscsi/task.o 00:02:17.965 LIB libspdk_nvmf.a 00:02:18.226 CC lib/ftl/ftl_trace.o 00:02:18.226 SO libspdk_nvmf.so.17.0 00:02:18.226 LIB libspdk_ftl.a 00:02:18.226 SYMLINK libspdk_nvmf.so 00:02:18.487 SO libspdk_ftl.so.8.0 00:02:18.487 LIB libspdk_vhost.a 00:02:18.748 LIB libspdk_iscsi.a 00:02:18.748 SO libspdk_vhost.so.7.1 00:02:18.748 SO libspdk_iscsi.so.7.0 00:02:18.748 SYMLINK libspdk_ftl.so 00:02:18.748 SYMLINK libspdk_vhost.so 00:02:18.748 SYMLINK libspdk_iscsi.so 00:02:19.011 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.011 CC module/blob/bdev/blob_bdev.o 00:02:19.011 CC module/sock/posix/posix.o 00:02:19.011 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.011 CC module/accel/iaa/accel_iaa.o 00:02:19.011 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:19.011 CC module/accel/dsa/accel_dsa.o 00:02:19.011 CC module/accel/ioat/accel_ioat.o 00:02:19.011 CC module/accel/error/accel_error.o 00:02:19.011 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.011 LIB libspdk_env_dpdk_rpc.a 00:02:19.011 SO libspdk_env_dpdk_rpc.so.5.0 00:02:19.272 LIB libspdk_scheduler_dpdk_governor.a 00:02:19.273 SYMLINK libspdk_env_dpdk_rpc.so 00:02:19.273 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.273 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:19.273 LIB libspdk_scheduler_dynamic.a 00:02:19.273 CC module/accel/error/accel_error_rpc.o 00:02:19.273 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.273 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.273 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.273 SO libspdk_scheduler_dynamic.so.3.0 00:02:19.273 LIB libspdk_scheduler_gscheduler.a 00:02:19.273 SO libspdk_scheduler_gscheduler.so.3.0 00:02:19.273 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.273 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.273 LIB libspdk_accel_ioat.a 00:02:19.273 LIB libspdk_blob_bdev.a 00:02:19.273 SO libspdk_accel_ioat.so.5.0 00:02:19.273 LIB libspdk_accel_iaa.a 00:02:19.273 LIB libspdk_accel_dsa.a 00:02:19.273 LIB libspdk_accel_error.a 00:02:19.273 SO libspdk_accel_dsa.so.4.0 00:02:19.273 SO libspdk_blob_bdev.so.10.1 00:02:19.273 SO libspdk_accel_iaa.so.2.0 00:02:19.273 SYMLINK libspdk_accel_ioat.so 00:02:19.273 SO libspdk_accel_error.so.1.0 00:02:19.273 SYMLINK libspdk_accel_dsa.so 00:02:19.273 SYMLINK libspdk_blob_bdev.so 00:02:19.273 SYMLINK libspdk_accel_iaa.so 00:02:19.273 SYMLINK libspdk_accel_error.so 00:02:19.535 CC module/bdev/delay/vbdev_delay.o 00:02:19.535 CC module/bdev/gpt/gpt.o 00:02:19.535 CC module/bdev/null/bdev_null.o 00:02:19.535 CC module/bdev/error/vbdev_error.o 00:02:19.535 CC module/bdev/malloc/bdev_malloc.o 00:02:19.535 CC module/bdev/passthru/vbdev_passthru.o 00:02:19.535 CC module/bdev/nvme/bdev_nvme.o 00:02:19.535 CC module/blobfs/bdev/blobfs_bdev.o 00:02:19.535 CC module/bdev/lvol/vbdev_lvol.o 00:02:19.535 LIB libspdk_sock_posix.a 00:02:19.535 CC module/bdev/gpt/vbdev_gpt.o 00:02:19.535 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:19.535 SO libspdk_sock_posix.so.5.0 00:02:19.797 CC module/bdev/null/bdev_null_rpc.o 00:02:19.797 CC module/bdev/error/vbdev_error_rpc.o 00:02:19.797 SYMLINK libspdk_sock_posix.so 00:02:19.797 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:19.797 LIB libspdk_blobfs_bdev.a 00:02:19.797 SO libspdk_blobfs_bdev.so.5.0 00:02:19.797 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:19.797 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:19.797 LIB libspdk_bdev_null.a 00:02:19.797 LIB libspdk_bdev_error.a 00:02:19.797 SYMLINK libspdk_blobfs_bdev.so 00:02:19.797 SO libspdk_bdev_null.so.5.0 00:02:19.797 SO 
libspdk_bdev_error.so.5.0 00:02:19.797 LIB libspdk_bdev_gpt.a 00:02:19.797 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:19.797 SO libspdk_bdev_gpt.so.5.0 00:02:19.797 SYMLINK libspdk_bdev_null.so 00:02:19.797 CC module/bdev/raid/bdev_raid.o 00:02:19.798 SYMLINK libspdk_bdev_error.so 00:02:19.798 LIB libspdk_bdev_malloc.a 00:02:19.798 SYMLINK libspdk_bdev_gpt.so 00:02:19.798 LIB libspdk_bdev_passthru.a 00:02:19.798 CC module/bdev/nvme/nvme_rpc.o 00:02:19.798 SO libspdk_bdev_passthru.so.5.0 00:02:19.798 SO libspdk_bdev_malloc.so.5.0 00:02:20.059 CC module/bdev/split/vbdev_split.o 00:02:20.059 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.059 SYMLINK libspdk_bdev_malloc.so 00:02:20.059 SYMLINK libspdk_bdev_passthru.so 00:02:20.059 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.059 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.059 LIB libspdk_bdev_delay.a 00:02:20.059 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.059 SO libspdk_bdev_delay.so.5.0 00:02:20.059 SYMLINK libspdk_bdev_delay.so 00:02:20.059 CC module/bdev/nvme/vbdev_opal.o 00:02:20.059 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.059 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.059 CC module/bdev/xnvme/bdev_xnvme.o 00:02:20.059 LIB libspdk_bdev_split.a 00:02:20.059 SO libspdk_bdev_split.so.5.0 00:02:20.320 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.320 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.320 SYMLINK libspdk_bdev_split.so 00:02:20.320 CC module/bdev/raid/raid0.o 00:02:20.320 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.320 CC module/bdev/raid/raid1.o 00:02:20.320 LIB libspdk_bdev_lvol.a 00:02:20.320 CC module/bdev/raid/concat.o 00:02:20.320 SO libspdk_bdev_lvol.so.5.0 00:02:20.320 LIB libspdk_bdev_zone_block.a 00:02:20.320 SO libspdk_bdev_zone_block.so.5.0 00:02:20.320 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:20.321 SYMLINK libspdk_bdev_lvol.so 00:02:20.321 SYMLINK libspdk_bdev_zone_block.so 00:02:20.321 CC module/bdev/aio/bdev_aio.o 00:02:20.582 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.582 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.582 CC module/bdev/ftl/bdev_ftl.o 00:02:20.582 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.582 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.582 LIB libspdk_bdev_xnvme.a 00:02:20.582 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.582 SO libspdk_bdev_xnvme.so.2.0 00:02:20.582 SYMLINK libspdk_bdev_xnvme.so 00:02:20.582 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.582 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.582 LIB libspdk_bdev_raid.a 00:02:20.582 SO libspdk_bdev_raid.so.5.0 00:02:20.582 LIB libspdk_bdev_ftl.a 00:02:20.582 SO libspdk_bdev_ftl.so.5.0 00:02:20.582 SYMLINK libspdk_bdev_raid.so 00:02:20.844 LIB libspdk_bdev_aio.a 00:02:20.844 SYMLINK libspdk_bdev_ftl.so 00:02:20.844 SO libspdk_bdev_aio.so.5.0 00:02:20.844 SYMLINK libspdk_bdev_aio.so 00:02:20.844 LIB libspdk_bdev_iscsi.a 00:02:20.844 SO libspdk_bdev_iscsi.so.5.0 00:02:20.844 SYMLINK libspdk_bdev_iscsi.so 00:02:20.844 LIB libspdk_bdev_virtio.a 00:02:21.106 SO libspdk_bdev_virtio.so.5.0 00:02:21.106 SYMLINK libspdk_bdev_virtio.so 00:02:21.365 LIB libspdk_bdev_nvme.a 00:02:21.626 SO libspdk_bdev_nvme.so.6.0 00:02:21.626 SYMLINK libspdk_bdev_nvme.so 00:02:21.886 CC module/event/subsystems/sock/sock.o 00:02:21.887 CC module/event/subsystems/scheduler/scheduler.o 00:02:21.887 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.887 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.887 CC module/event/subsystems/vmd/vmd.o 00:02:21.887 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.887 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.887 LIB libspdk_event_sock.a 00:02:21.887 LIB libspdk_event_scheduler.a 00:02:21.887 LIB libspdk_event_vhost_blk.a 00:02:21.887 SO libspdk_event_sock.so.4.0 00:02:21.887 LIB libspdk_event_vmd.a 00:02:21.887 SO libspdk_event_scheduler.so.3.0 00:02:21.887 SO libspdk_event_vhost_blk.so.2.0 00:02:21.887 SO libspdk_event_vmd.so.5.0 00:02:21.887 LIB libspdk_event_iobuf.a 00:02:21.887 SYMLINK libspdk_event_sock.so 00:02:21.887 SO libspdk_event_iobuf.so.2.0 00:02:21.887 SYMLINK libspdk_event_scheduler.so 00:02:21.887 SYMLINK libspdk_event_vhost_blk.so 00:02:22.147 SYMLINK libspdk_event_vmd.so 00:02:22.147 SYMLINK libspdk_event_iobuf.so 00:02:22.147 CC module/event/subsystems/accel/accel.o 00:02:22.406 LIB libspdk_event_accel.a 00:02:22.406 SO libspdk_event_accel.so.5.0 00:02:22.406 SYMLINK libspdk_event_accel.so 00:02:22.667 CC module/event/subsystems/bdev/bdev.o 00:02:22.667 LIB libspdk_event_bdev.a 00:02:22.667 SO libspdk_event_bdev.so.5.0 00:02:22.667 SYMLINK libspdk_event_bdev.so 00:02:22.928 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.928 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.928 CC module/event/subsystems/scsi/scsi.o 00:02:22.928 CC module/event/subsystems/ublk/ublk.o 00:02:22.928 CC module/event/subsystems/nbd/nbd.o 00:02:22.928 LIB libspdk_event_scsi.a 00:02:22.928 LIB libspdk_event_ublk.a 00:02:22.928 LIB libspdk_event_nbd.a 00:02:22.928 SO libspdk_event_ublk.so.2.0 00:02:22.928 SO libspdk_event_scsi.so.5.0 00:02:22.928 SO libspdk_event_nbd.so.5.0 00:02:22.928 LIB libspdk_event_nvmf.a 00:02:22.928 SYMLINK libspdk_event_ublk.so 00:02:22.928 SYMLINK libspdk_event_scsi.so 00:02:23.188 SYMLINK libspdk_event_nbd.so 00:02:23.188 SO libspdk_event_nvmf.so.5.0 00:02:23.188 SYMLINK libspdk_event_nvmf.so 00:02:23.188 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.188 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.188 LIB libspdk_event_vhost_scsi.a 00:02:23.188 SO libspdk_event_vhost_scsi.so.2.0 00:02:23.188 LIB libspdk_event_iscsi.a 00:02:23.446 SO libspdk_event_iscsi.so.5.0 00:02:23.446 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.446 SYMLINK libspdk_event_iscsi.so 00:02:23.446 SO libspdk.so.5.0 00:02:23.446 SYMLINK libspdk.so 00:02:23.446 CC app/trace_record/trace_record.o 00:02:23.446 TEST_HEADER include/spdk/accel.h 00:02:23.446 CXX app/trace/trace.o 00:02:23.705 TEST_HEADER include/spdk/accel_module.h 00:02:23.705 TEST_HEADER include/spdk/assert.h 00:02:23.705 TEST_HEADER include/spdk/barrier.h 00:02:23.705 TEST_HEADER include/spdk/base64.h 00:02:23.705 TEST_HEADER include/spdk/bdev.h 00:02:23.705 TEST_HEADER include/spdk/bdev_module.h 00:02:23.705 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.705 TEST_HEADER include/spdk/bit_array.h 00:02:23.705 TEST_HEADER include/spdk/bit_pool.h 00:02:23.705 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.705 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:23.705 TEST_HEADER include/spdk/blobfs.h 00:02:23.705 TEST_HEADER include/spdk/blob.h 00:02:23.705 TEST_HEADER include/spdk/conf.h 00:02:23.705 TEST_HEADER include/spdk/config.h 00:02:23.705 TEST_HEADER include/spdk/cpuset.h 00:02:23.705 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.705 TEST_HEADER include/spdk/crc16.h 00:02:23.705 TEST_HEADER include/spdk/crc32.h 00:02:23.705 TEST_HEADER include/spdk/crc64.h 00:02:23.705 CC app/nvmf_tgt/nvmf_main.o 00:02:23.705 TEST_HEADER include/spdk/dif.h 00:02:23.705 TEST_HEADER include/spdk/dma.h 00:02:23.705 TEST_HEADER include/spdk/endian.h 00:02:23.705 TEST_HEADER 
include/spdk/env_dpdk.h 00:02:23.705 CC examples/accel/perf/accel_perf.o 00:02:23.705 TEST_HEADER include/spdk/env.h 00:02:23.705 TEST_HEADER include/spdk/event.h 00:02:23.705 TEST_HEADER include/spdk/fd_group.h 00:02:23.705 TEST_HEADER include/spdk/fd.h 00:02:23.705 TEST_HEADER include/spdk/file.h 00:02:23.705 TEST_HEADER include/spdk/ftl.h 00:02:23.705 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.705 TEST_HEADER include/spdk/hexlify.h 00:02:23.705 TEST_HEADER include/spdk/histogram_data.h 00:02:23.705 CC test/blobfs/mkfs/mkfs.o 00:02:23.705 TEST_HEADER include/spdk/idxd.h 00:02:23.705 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.705 TEST_HEADER include/spdk/init.h 00:02:23.705 TEST_HEADER include/spdk/ioat.h 00:02:23.705 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.705 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.705 TEST_HEADER include/spdk/json.h 00:02:23.705 CC test/bdev/bdevio/bdevio.o 00:02:23.705 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.705 TEST_HEADER include/spdk/likely.h 00:02:23.705 CC test/app/bdev_svc/bdev_svc.o 00:02:23.705 TEST_HEADER include/spdk/log.h 00:02:23.705 TEST_HEADER include/spdk/lvol.h 00:02:23.705 TEST_HEADER include/spdk/memory.h 00:02:23.706 TEST_HEADER include/spdk/mmio.h 00:02:23.706 TEST_HEADER include/spdk/nbd.h 00:02:23.706 TEST_HEADER include/spdk/notify.h 00:02:23.706 TEST_HEADER include/spdk/nvme.h 00:02:23.706 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.706 CC test/accel/dif/dif.o 00:02:23.706 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.706 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.706 TEST_HEADER include/spdk/nvme_spec.h 00:02:23.706 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.706 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.706 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.706 TEST_HEADER include/spdk/nvmf.h 00:02:23.706 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.706 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.706 TEST_HEADER include/spdk/opal.h 00:02:23.706 TEST_HEADER include/spdk/opal_spec.h 00:02:23.706 TEST_HEADER include/spdk/pci_ids.h 00:02:23.706 TEST_HEADER include/spdk/pipe.h 00:02:23.706 TEST_HEADER include/spdk/queue.h 00:02:23.706 TEST_HEADER include/spdk/reduce.h 00:02:23.706 TEST_HEADER include/spdk/rpc.h 00:02:23.706 TEST_HEADER include/spdk/scheduler.h 00:02:23.706 TEST_HEADER include/spdk/scsi.h 00:02:23.706 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.706 TEST_HEADER include/spdk/sock.h 00:02:23.706 TEST_HEADER include/spdk/stdinc.h 00:02:23.706 TEST_HEADER include/spdk/string.h 00:02:23.706 TEST_HEADER include/spdk/thread.h 00:02:23.706 TEST_HEADER include/spdk/trace.h 00:02:23.706 TEST_HEADER include/spdk/trace_parser.h 00:02:23.706 TEST_HEADER include/spdk/tree.h 00:02:23.706 TEST_HEADER include/spdk/ublk.h 00:02:23.706 TEST_HEADER include/spdk/util.h 00:02:23.706 TEST_HEADER include/spdk/uuid.h 00:02:23.706 TEST_HEADER include/spdk/version.h 00:02:23.706 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.706 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.706 TEST_HEADER include/spdk/vhost.h 00:02:23.706 TEST_HEADER include/spdk/vmd.h 00:02:23.706 TEST_HEADER include/spdk/xor.h 00:02:23.706 TEST_HEADER include/spdk/zipf.h 00:02:23.706 CXX test/cpp_headers/accel.o 00:02:23.706 LINK spdk_trace_record 00:02:23.706 LINK mkfs 00:02:23.706 LINK nvmf_tgt 00:02:23.706 LINK iscsi_tgt 00:02:23.706 LINK bdev_svc 00:02:23.966 CXX test/cpp_headers/accel_module.o 00:02:23.966 CXX test/cpp_headers/assert.o 00:02:23.966 LINK spdk_trace 00:02:23.966 LINK bdevio 00:02:23.966 CC app/spdk_tgt/spdk_tgt.o 
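The long run of TEST_HEADER entries paired with "CXX test/cpp_headers/<name>.o" compiles through this stretch of the build is a header self-sufficiency check: every public header is built as its own C++ translation unit, so a header that fails to include its own dependencies breaks the build immediately. A minimal sketch of such a generator loop, assuming an include/spdk layout; the script shape, output directory, and compiler invocation are illustrative assumptions, since the generator itself never appears in this log:

    #!/usr/bin/env bash
    # Sketch of the header check implied by the TEST_HEADER /
    # "CXX test/cpp_headers/<name>.o" pairs in the trace. Paths and the
    # plain c++ invocation are assumptions, not taken from the log.
    set -euo pipefail
    include_dir=include/spdk
    out_dir=test/cpp_headers
    mkdir -p "$out_dir"
    for hdr in "$include_dir"/*.h; do
        name=$(basename "$hdr" .h)
        # The stub holds nothing but the include: compiling it proves the
        # header stands alone under a C++ compiler.
        printf '#include <spdk/%s.h>\n' "$name" > "$out_dir/$name.cpp"
        c++ -I include -c "$out_dir/$name.cpp" -o "$out_dir/$name.o"
    done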
00:02:23.966 CXX test/cpp_headers/barrier.o 00:02:23.966 CC examples/blob/hello_world/hello_blob.o 00:02:23.966 LINK dif 00:02:23.966 CC examples/bdev/hello_world/hello_bdev.o 00:02:23.966 CC test/app/histogram_perf/histogram_perf.o 00:02:23.966 LINK accel_perf 00:02:23.966 CC app/spdk_lspci/spdk_lspci.o 00:02:23.966 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.224 CC app/spdk_nvme_perf/perf.o 00:02:24.224 CXX test/cpp_headers/base64.o 00:02:24.224 LINK spdk_tgt 00:02:24.224 LINK histogram_perf 00:02:24.224 LINK spdk_lspci 00:02:24.224 CC app/spdk_nvme_identify/identify.o 00:02:24.224 CC test/app/jsoncat/jsoncat.o 00:02:24.224 LINK hello_blob 00:02:24.224 CXX test/cpp_headers/bdev.o 00:02:24.224 LINK hello_bdev 00:02:24.224 CXX test/cpp_headers/bdev_module.o 00:02:24.482 CC test/app/stub/stub.o 00:02:24.482 LINK jsoncat 00:02:24.482 CC examples/blob/cli/blobcli.o 00:02:24.482 CXX test/cpp_headers/bdev_zone.o 00:02:24.482 CXX test/cpp_headers/bit_array.o 00:02:24.482 CXX test/cpp_headers/bit_pool.o 00:02:24.482 LINK nvme_fuzz 00:02:24.482 CC examples/bdev/bdevperf/bdevperf.o 00:02:24.482 CXX test/cpp_headers/blob_bdev.o 00:02:24.482 LINK stub 00:02:24.482 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.482 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.482 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.742 CXX test/cpp_headers/blobfs.o 00:02:24.742 CC test/dma/test_dma/test_dma.o 00:02:24.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.742 LINK spdk_nvme_discover 00:02:24.742 LINK blobcli 00:02:24.742 CXX test/cpp_headers/blob.o 00:02:24.742 CXX test/cpp_headers/conf.o 00:02:24.742 CC examples/ioat/perf/perf.o 00:02:25.003 CC examples/ioat/verify/verify.o 00:02:25.003 CXX test/cpp_headers/config.o 00:02:25.003 LINK spdk_nvme_perf 00:02:25.003 LINK test_dma 00:02:25.003 CXX test/cpp_headers/cpuset.o 00:02:25.003 CC app/spdk_top/spdk_top.o 00:02:25.003 LINK spdk_nvme_identify 00:02:25.003 LINK ioat_perf 00:02:25.003 LINK bdevperf 00:02:25.003 CXX test/cpp_headers/crc16.o 00:02:25.003 LINK verify 00:02:25.003 LINK vhost_fuzz 00:02:25.003 CXX test/cpp_headers/crc32.o 00:02:25.265 CC app/vhost/vhost.o 00:02:25.265 CC examples/nvme/reconnect/reconnect.o 00:02:25.265 CC examples/sock/hello_world/hello_sock.o 00:02:25.265 CC examples/nvme/hello_world/hello_world.o 00:02:25.265 CXX test/cpp_headers/crc64.o 00:02:25.265 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.265 CC test/event/event_perf/event_perf.o 00:02:25.265 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.265 LINK vhost 00:02:25.265 CXX test/cpp_headers/dif.o 00:02:25.526 LINK event_perf 00:02:25.526 LINK lsvmd 00:02:25.526 LINK hello_sock 00:02:25.526 LINK hello_world 00:02:25.526 CXX test/cpp_headers/dma.o 00:02:25.526 LINK reconnect 00:02:25.526 CC examples/vmd/led/led.o 00:02:25.526 CC test/event/reactor/reactor.o 00:02:25.526 CC test/env/vtophys/vtophys.o 00:02:25.526 CXX test/cpp_headers/endian.o 00:02:25.526 LINK led 00:02:25.787 LINK reactor 00:02:25.787 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.787 CC examples/nvmf/nvmf/nvmf.o 00:02:25.787 CC examples/util/zipf/zipf.o 00:02:25.787 LINK vtophys 00:02:25.787 CXX test/cpp_headers/env_dpdk.o 00:02:25.787 LINK zipf 00:02:25.787 LINK mem_callbacks 00:02:25.787 CC test/event/reactor_perf/reactor_perf.o 00:02:25.787 LINK spdk_top 00:02:25.787 CC test/event/app_repeat/app_repeat.o 00:02:25.787 CXX test/cpp_headers/env.o 00:02:25.787 CXX test/cpp_headers/event.o 00:02:25.787 CC test/lvol/esnap/esnap.o 00:02:25.787 
LINK nvmf 00:02:26.049 LINK reactor_perf 00:02:26.049 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:26.049 LINK app_repeat 00:02:26.049 CXX test/cpp_headers/fd_group.o 00:02:26.049 CC app/spdk_dd/spdk_dd.o 00:02:26.049 LINK nvme_manage 00:02:26.049 LINK env_dpdk_post_init 00:02:26.049 CC test/nvme/aer/aer.o 00:02:26.049 CC test/nvme/reset/reset.o 00:02:26.049 CXX test/cpp_headers/fd.o 00:02:26.308 LINK iscsi_fuzz 00:02:26.308 CC examples/thread/thread/thread_ex.o 00:02:26.308 CC test/event/scheduler/scheduler.o 00:02:26.308 CC examples/nvme/arbitration/arbitration.o 00:02:26.308 CC test/env/memory/memory_ut.o 00:02:26.308 CXX test/cpp_headers/file.o 00:02:26.308 LINK aer 00:02:26.308 LINK reset 00:02:26.308 LINK scheduler 00:02:26.308 LINK thread 00:02:26.308 LINK spdk_dd 00:02:26.308 CXX test/cpp_headers/ftl.o 00:02:26.566 CC test/env/pci/pci_ut.o 00:02:26.566 CC test/nvme/sgl/sgl.o 00:02:26.566 LINK arbitration 00:02:26.566 CC test/rpc_client/rpc_client_test.o 00:02:26.566 CXX test/cpp_headers/gpt_spec.o 00:02:26.566 CC app/fio/nvme/fio_plugin.o 00:02:26.566 CC examples/idxd/perf/perf.o 00:02:26.566 CC app/fio/bdev/fio_plugin.o 00:02:26.566 CC examples/nvme/hotplug/hotplug.o 00:02:26.566 LINK sgl 00:02:26.836 CXX test/cpp_headers/hexlify.o 00:02:26.836 LINK rpc_client_test 00:02:26.836 LINK pci_ut 00:02:26.836 CXX test/cpp_headers/histogram_data.o 00:02:26.836 LINK hotplug 00:02:26.836 CC test/nvme/e2edp/nvme_dp.o 00:02:26.836 CC test/nvme/overhead/overhead.o 00:02:26.836 LINK idxd_perf 00:02:27.108 CXX test/cpp_headers/idxd.o 00:02:27.108 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:27.108 LINK nvme_dp 00:02:27.108 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.108 CXX test/cpp_headers/idxd_spec.o 00:02:27.108 LINK memory_ut 00:02:27.108 CXX test/cpp_headers/init.o 00:02:27.108 LINK overhead 00:02:27.108 LINK spdk_bdev 00:02:27.108 CXX test/cpp_headers/ioat.o 00:02:27.108 LINK interrupt_tgt 00:02:27.108 LINK spdk_nvme 00:02:27.108 CXX test/cpp_headers/ioat_spec.o 00:02:27.108 LINK cmb_copy 00:02:27.108 CXX test/cpp_headers/iscsi_spec.o 00:02:27.108 CXX test/cpp_headers/json.o 00:02:27.366 CC examples/nvme/abort/abort.o 00:02:27.366 CXX test/cpp_headers/jsonrpc.o 00:02:27.366 CXX test/cpp_headers/likely.o 00:02:27.366 CC test/nvme/err_injection/err_injection.o 00:02:27.366 CXX test/cpp_headers/log.o 00:02:27.366 CXX test/cpp_headers/lvol.o 00:02:27.366 CXX test/cpp_headers/memory.o 00:02:27.366 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.366 CXX test/cpp_headers/mmio.o 00:02:27.366 CC test/nvme/startup/startup.o 00:02:27.366 CXX test/cpp_headers/nbd.o 00:02:27.366 CXX test/cpp_headers/notify.o 00:02:27.366 CXX test/cpp_headers/nvme.o 00:02:27.366 CXX test/cpp_headers/nvme_intel.o 00:02:27.366 LINK err_injection 00:02:27.366 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.366 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.366 LINK pmr_persistence 00:02:27.624 LINK startup 00:02:27.624 CXX test/cpp_headers/nvme_spec.o 00:02:27.624 CXX test/cpp_headers/nvme_zns.o 00:02:27.624 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.624 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.624 CXX test/cpp_headers/nvmf.o 00:02:27.624 CXX test/cpp_headers/nvmf_spec.o 00:02:27.624 LINK abort 00:02:27.624 CXX test/cpp_headers/nvmf_transport.o 00:02:27.624 CXX test/cpp_headers/opal.o 00:02:27.624 CXX test/cpp_headers/opal_spec.o 00:02:27.624 CC test/nvme/reserve/reserve.o 00:02:27.624 CXX test/cpp_headers/pci_ids.o 00:02:27.624 CXX test/cpp_headers/pipe.o 00:02:27.885 CXX 
test/cpp_headers/queue.o 00:02:27.885 CXX test/cpp_headers/reduce.o 00:02:27.885 CC test/nvme/connect_stress/connect_stress.o 00:02:27.885 CC test/nvme/simple_copy/simple_copy.o 00:02:27.885 CXX test/cpp_headers/rpc.o 00:02:27.885 CC test/thread/poller_perf/poller_perf.o 00:02:27.885 CXX test/cpp_headers/scheduler.o 00:02:27.885 CXX test/cpp_headers/scsi.o 00:02:27.885 CXX test/cpp_headers/scsi_spec.o 00:02:27.885 LINK reserve 00:02:27.885 LINK poller_perf 00:02:27.885 CXX test/cpp_headers/sock.o 00:02:27.885 LINK connect_stress 00:02:27.885 CXX test/cpp_headers/stdinc.o 00:02:27.885 CC test/nvme/boot_partition/boot_partition.o 00:02:27.885 CXX test/cpp_headers/string.o 00:02:27.885 CC test/nvme/compliance/nvme_compliance.o 00:02:27.885 LINK simple_copy 00:02:28.146 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.146 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.146 CXX test/cpp_headers/thread.o 00:02:28.146 LINK boot_partition 00:02:28.146 CXX test/cpp_headers/trace.o 00:02:28.146 CXX test/cpp_headers/trace_parser.o 00:02:28.146 CXX test/cpp_headers/tree.o 00:02:28.146 CC test/nvme/fdp/fdp.o 00:02:28.146 CXX test/cpp_headers/ublk.o 00:02:28.146 CXX test/cpp_headers/util.o 00:02:28.146 LINK fused_ordering 00:02:28.146 CXX test/cpp_headers/uuid.o 00:02:28.146 LINK doorbell_aers 00:02:28.146 CXX test/cpp_headers/version.o 00:02:28.146 CXX test/cpp_headers/vfio_user_pci.o 00:02:28.146 CXX test/cpp_headers/vfio_user_spec.o 00:02:28.407 CXX test/cpp_headers/vhost.o 00:02:28.407 LINK nvme_compliance 00:02:28.407 CXX test/cpp_headers/vmd.o 00:02:28.407 CXX test/cpp_headers/xor.o 00:02:28.407 CXX test/cpp_headers/zipf.o 00:02:28.407 CC test/nvme/cuse/cuse.o 00:02:28.407 LINK fdp 00:02:29.351 LINK cuse 00:02:29.922 LINK esnap 00:02:30.183 00:02:30.183 real 0m49.218s 00:02:30.183 user 4m53.454s 00:02:30.183 sys 0m59.304s 00:02:30.183 10:30:00 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:30.183 ************************************ 00:02:30.183 END TEST make 00:02:30.183 ************************************ 00:02:30.183 10:30:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.183 10:30:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:30.183 10:30:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:30.183 10:30:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:30.183 10:30:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:30.183 10:30:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:30.183 10:30:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:30.183 10:30:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:30.183 10:30:00 -- scripts/common.sh@335 -- # IFS=.-: 00:02:30.183 10:30:00 -- scripts/common.sh@335 -- # read -ra ver1 00:02:30.183 10:30:00 -- scripts/common.sh@336 -- # IFS=.-: 00:02:30.183 10:30:00 -- scripts/common.sh@336 -- # read -ra ver2 00:02:30.183 10:30:00 -- scripts/common.sh@337 -- # local 'op=<' 00:02:30.183 10:30:00 -- scripts/common.sh@339 -- # ver1_l=2 00:02:30.183 10:30:00 -- scripts/common.sh@340 -- # ver2_l=1 00:02:30.183 10:30:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:30.183 10:30:00 -- scripts/common.sh@343 -- # case "$op" in 00:02:30.183 10:30:00 -- scripts/common.sh@344 -- # : 1 00:02:30.183 10:30:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:30.183 10:30:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:30.183 10:30:00 -- scripts/common.sh@364 -- # decimal 1 00:02:30.183 10:30:00 -- scripts/common.sh@352 -- # local d=1 00:02:30.183 10:30:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:30.183 10:30:00 -- scripts/common.sh@354 -- # echo 1 00:02:30.183 10:30:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:30.183 10:30:00 -- scripts/common.sh@365 -- # decimal 2 00:02:30.183 10:30:00 -- scripts/common.sh@352 -- # local d=2 00:02:30.183 10:30:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:30.183 10:30:00 -- scripts/common.sh@354 -- # echo 2 00:02:30.183 10:30:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:30.183 10:30:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:30.183 10:30:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:30.183 10:30:00 -- scripts/common.sh@367 -- # return 0 00:02:30.183 10:30:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:30.183 10:30:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:30.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.183 --rc genhtml_branch_coverage=1 00:02:30.183 --rc genhtml_function_coverage=1 00:02:30.183 --rc genhtml_legend=1 00:02:30.183 --rc geninfo_all_blocks=1 00:02:30.183 --rc geninfo_unexecuted_blocks=1 00:02:30.183 00:02:30.183 ' 00:02:30.183 10:30:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:30.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.183 --rc genhtml_branch_coverage=1 00:02:30.183 --rc genhtml_function_coverage=1 00:02:30.183 --rc genhtml_legend=1 00:02:30.183 --rc geninfo_all_blocks=1 00:02:30.183 --rc geninfo_unexecuted_blocks=1 00:02:30.183 00:02:30.183 ' 00:02:30.183 10:30:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:30.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.183 --rc genhtml_branch_coverage=1 00:02:30.183 --rc genhtml_function_coverage=1 00:02:30.183 --rc genhtml_legend=1 00:02:30.183 --rc geninfo_all_blocks=1 00:02:30.183 --rc geninfo_unexecuted_blocks=1 00:02:30.183 00:02:30.183 ' 00:02:30.183 10:30:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:30.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.183 --rc genhtml_branch_coverage=1 00:02:30.183 --rc genhtml_function_coverage=1 00:02:30.183 --rc genhtml_legend=1 00:02:30.183 --rc geninfo_all_blocks=1 00:02:30.183 --rc geninfo_unexecuted_blocks=1 00:02:30.183 00:02:30.183 ' 00:02:30.183 10:30:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:30.183 10:30:00 -- nvmf/common.sh@7 -- # uname -s 00:02:30.183 10:30:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:30.183 10:30:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:30.183 10:30:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:30.183 10:30:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:30.183 10:30:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:30.183 10:30:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:30.183 10:30:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:30.183 10:30:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:30.183 10:30:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:30.183 10:30:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:30.445 10:30:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15b22541-b866-4b77-a57b-12205cff22be 00:02:30.445 
10:30:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=15b22541-b866-4b77-a57b-12205cff22be 00:02:30.445 10:30:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:30.445 10:30:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:30.445 10:30:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:30.445 10:30:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:30.445 10:30:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:30.445 10:30:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.445 10:30:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.445 10:30:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.445 10:30:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.445 10:30:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.445 10:30:00 -- paths/export.sh@5 -- # export PATH 00:02:30.445 10:30:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.445 10:30:00 -- nvmf/common.sh@46 -- # : 0 00:02:30.445 10:30:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:30.445 10:30:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:30.445 10:30:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:30.445 10:30:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:30.445 10:30:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:30.445 10:30:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:30.445 10:30:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:30.445 10:30:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:30.445 10:30:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:30.445 10:30:00 -- spdk/autotest.sh@32 -- # uname -s 00:02:30.445 10:30:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:30.445 10:30:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:30.445 10:30:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:30.445 10:30:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:30.445 10:30:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:30.445 10:30:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:30.445 10:30:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:30.445 10:30:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:30.445 10:30:00 -- spdk/autotest.sh@48 
-- # udevadm_pid=48144 00:02:30.445 10:30:00 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:30.445 10:30:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:30.445 10:30:00 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:30.445 10:30:00 -- spdk/autotest.sh@54 -- # echo 48171 00:02:30.445 10:30:00 -- spdk/autotest.sh@56 -- # echo 48176 00:02:30.445 10:30:00 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:30.445 10:30:00 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:30.445 10:30:00 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:30.445 10:30:00 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:30.445 10:30:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:30.445 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:02:30.445 10:30:00 -- spdk/autotest.sh@70 -- # create_test_list 00:02:30.445 10:30:00 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:30.445 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:02:30.445 10:30:00 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:30.445 10:30:00 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:30.445 10:30:00 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:30.445 10:30:00 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:30.445 10:30:00 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:30.445 10:30:00 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:30.445 10:30:00 -- common/autotest_common.sh@1450 -- # uname 00:02:30.445 10:30:00 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:30.445 10:30:00 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:30.445 10:30:00 -- common/autotest_common.sh@1470 -- # uname 00:02:30.445 10:30:00 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:30.445 10:30:00 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:30.445 10:30:00 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:30.445 lcov: LCOV version 1.15 00:02:30.446 10:30:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:38.583 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:38.583 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:38.583 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:38.583 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:38.583 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:38.583 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:56.790 10:30:27 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:02:56.790 10:30:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:56.790 10:30:27 -- common/autotest_common.sh@10 -- # set +x 00:02:56.790 10:30:27 -- spdk/autotest.sh@89 -- # rm -f 00:02:56.790 10:30:27 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:02:57.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:57.752 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:02:57.752 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:02:57.752 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:02:57.752 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:02:58.014 10:30:28 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:58.014 10:30:28 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:58.014 10:30:28 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:58.014 10:30:28 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:58.014 10:30:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.014 10:30:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.014 10:30:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.014 10:30:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:02:58.014 10:30:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:02:58.014 10:30:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.014 10:30:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:02:58.014 10:30:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:02:58.014 10:30:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.014 10:30:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.014 10:30:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:02:58.014 10:30:28 -- 
common/autotest_common.sh@1657 -- # local device=nvme2n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:58.014 10:30:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:02:58.014 10:30:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:02:58.014 10:30:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:58.014 10:30:28 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:02:58.014 10:30:28 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 /dev/nvme2n1 /dev/nvme3n1 00:02:58.014 10:30:28 -- spdk/autotest.sh@108 -- # grep -v p 00:02:58.014 10:30:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.014 10:30:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:58.014 10:30:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:58.014 10:30:28 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:58.014 10:30:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:58.014 No valid GPT data, bailing 00:02:58.014 10:30:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:58.014 10:30:28 -- scripts/common.sh@393 -- # pt= 00:02:58.014 10:30:28 -- scripts/common.sh@394 -- # return 1 00:02:58.014 10:30:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:58.014 1+0 records in 00:02:58.014 1+0 records out 00:02:58.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612342 s, 171 MB/s 00:02:58.014 10:30:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.014 10:30:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:58.014 10:30:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:02:58.014 10:30:28 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:58.014 10:30:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:58.014 No valid GPT data, bailing 00:02:58.014 10:30:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:58.014 10:30:28 -- scripts/common.sh@393 -- # pt= 00:02:58.014 10:30:28 -- scripts/common.sh@394 -- # return 1 00:02:58.014 10:30:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:58.014 1+0 records in 00:02:58.014 1+0 records out 00:02:58.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524698 s, 200 MB/s 00:02:58.014 10:30:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.014 10:30:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:58.014 10:30:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:02:58.014 10:30:28 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:02:58.014 10:30:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:02:58.014 No valid GPT data, bailing 00:02:58.276 10:30:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:02:58.276 10:30:28 -- scripts/common.sh@393 -- # pt= 00:02:58.276 10:30:28 -- scripts/common.sh@394 -- # return 1 00:02:58.276 10:30:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:02:58.276 1+0 
records in 00:02:58.276 1+0 records out 00:02:58.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551491 s, 190 MB/s 00:02:58.276 10:30:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.276 10:30:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:58.276 10:30:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:02:58.276 10:30:28 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:02:58.276 10:30:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:02:58.276 No valid GPT data, bailing 00:02:58.276 10:30:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:02:58.276 10:30:28 -- scripts/common.sh@393 -- # pt= 00:02:58.276 10:30:28 -- scripts/common.sh@394 -- # return 1 00:02:58.276 10:30:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:02:58.276 1+0 records in 00:02:58.276 1+0 records out 00:02:58.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642081 s, 163 MB/s 00:02:58.276 10:30:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.276 10:30:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:58.276 10:30:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:02:58.276 10:30:28 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:02:58.276 10:30:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:02:58.276 No valid GPT data, bailing 00:02:58.276 10:30:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:58.276 10:30:28 -- scripts/common.sh@393 -- # pt= 00:02:58.276 10:30:28 -- scripts/common.sh@394 -- # return 1 00:02:58.276 10:30:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:58.276 1+0 records in 00:02:58.276 1+0 records out 00:02:58.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675753 s, 155 MB/s 00:02:58.276 10:30:28 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:58.276 10:30:28 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:58.276 10:30:28 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:02:58.276 10:30:28 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:02:58.276 10:30:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:02:58.276 No valid GPT data, bailing 00:02:58.536 10:30:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:02:58.536 10:30:28 -- scripts/common.sh@393 -- # pt= 00:02:58.536 10:30:28 -- scripts/common.sh@394 -- # return 1 00:02:58.536 10:30:28 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:02:58.536 1+0 records in 00:02:58.536 1+0 records out 00:02:58.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0340791 s, 30.8 MB/s 00:02:58.536 10:30:28 -- spdk/autotest.sh@116 -- # sync 00:02:58.536 10:30:29 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:58.536 10:30:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:58.536 10:30:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:00.455 10:30:30 -- spdk/autotest.sh@122 -- # uname -s 00:03:00.455 10:30:30 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:00.455 10:30:30 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:00.455 10:30:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:00.455 10:30:30 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:00.455 10:30:30 -- common/autotest_common.sh@10 -- # set +x 00:03:00.455 ************************************ 00:03:00.455 START TEST setup.sh 00:03:00.455 ************************************ 00:03:00.455 10:30:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:00.455 * Looking for test storage... 00:03:00.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:00.455 10:30:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:00.455 10:30:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:00.455 10:30:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:00.455 10:30:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:00.455 10:30:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:00.455 10:30:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:00.455 10:30:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:00.455 10:30:30 -- scripts/common.sh@335 -- # IFS=.-: 00:03:00.455 10:30:30 -- scripts/common.sh@335 -- # read -ra ver1 00:03:00.455 10:30:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.455 10:30:30 -- scripts/common.sh@336 -- # read -ra ver2 00:03:00.455 10:30:30 -- scripts/common.sh@337 -- # local 'op=<' 00:03:00.455 10:30:30 -- scripts/common.sh@339 -- # ver1_l=2 00:03:00.455 10:30:30 -- scripts/common.sh@340 -- # ver2_l=1 00:03:00.455 10:30:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:00.455 10:30:30 -- scripts/common.sh@343 -- # case "$op" in 00:03:00.455 10:30:30 -- scripts/common.sh@344 -- # : 1 00:03:00.455 10:30:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:00.455 10:30:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.455 10:30:30 -- scripts/common.sh@364 -- # decimal 1 00:03:00.455 10:30:30 -- scripts/common.sh@352 -- # local d=1 00:03:00.455 10:30:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.455 10:30:30 -- scripts/common.sh@354 -- # echo 1 00:03:00.455 10:30:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:00.455 10:30:30 -- scripts/common.sh@365 -- # decimal 2 00:03:00.455 10:30:30 -- scripts/common.sh@352 -- # local d=2 00:03:00.455 10:30:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.455 10:30:30 -- scripts/common.sh@354 -- # echo 2 00:03:00.455 10:30:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:00.455 10:30:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:00.455 10:30:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:00.455 10:30:30 -- scripts/common.sh@367 -- # return 0 00:03:00.455 10:30:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.455 10:30:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.455 10:30:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.455 10:30:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.455 10:30:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.455 10:30:30 -- setup/test-setup.sh@10 -- # uname -s 00:03:00.455 10:30:30 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:00.455 10:30:30 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:00.455 10:30:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:00.455 10:30:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:00.455 10:30:30 -- common/autotest_common.sh@10 -- # set +x 00:03:00.455 ************************************ 00:03:00.455 START TEST acl 00:03:00.455 ************************************ 00:03:00.455 10:30:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:00.455 * Looking for test storage... 
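The "lt 1.15 2" xtrace that repeats before each test suite (here gating which lcov option style gets exported) walks through cmp_versions from scripts/common.sh step by step: both version strings are split on '.', '-' and ':', components are normalized through the decimal helper, and the first unequal component decides the result. A runnable sketch reconstructed from that trace; only the "<" operator path is exercised in this run, so the other case branches and the non-numeric fallback in decimal are assumptions:

    # cmp_versions reconstructed from the xtrace above. For "lt 1.15 2",
    # ver1=(1 15), ver2=(2), and 1 < 2 decides the comparison at v=0,
    # returning 0 exactly as the trace shows.
    decimal() {
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then
            echo "$d"
        else
            echo 0 # non-numeric components compare as 0 (assumption)
        fi
    }

    cmp_versions() {
        local ver1 ver1_l
        local ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        local lt=0 gt=0 eq=0 v
        case "$op" in
            "<") lt=1 ;;
            ">") gt=1 ;;
            "==") eq=1 ;;
            "<=") lt=1 eq=1 ;;
            ">=") gt=1 eq=1 ;;
            *) return 1 ;; # unsupported operator (assumed error path)
        esac
        # Walk the longer component list; the first unequal pair decides.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && return $((!gt))
            ((ver1[v] < ver2[v])) && return $((!lt))
        done
        ((eq == 1))
    }

    lt() { cmp_versions "$1" "<" "$2"; }

Since lcov --version reports 1.15 here, lt 1.15 2 succeeds and the older lcov_branch_coverage/lcov_function_coverage option strings seen above are exported.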
00:03:00.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:00.455 10:30:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:00.455 10:30:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:00.455 10:30:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:00.455 10:30:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:00.455 10:30:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:00.455 10:30:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:00.455 10:30:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:00.455 10:30:31 -- scripts/common.sh@335 -- # IFS=.-: 00:03:00.455 10:30:31 -- scripts/common.sh@335 -- # read -ra ver1 00:03:00.455 10:30:31 -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.455 10:30:31 -- scripts/common.sh@336 -- # read -ra ver2 00:03:00.455 10:30:31 -- scripts/common.sh@337 -- # local 'op=<' 00:03:00.455 10:30:31 -- scripts/common.sh@339 -- # ver1_l=2 00:03:00.455 10:30:31 -- scripts/common.sh@340 -- # ver2_l=1 00:03:00.455 10:30:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:00.455 10:30:31 -- scripts/common.sh@343 -- # case "$op" in 00:03:00.455 10:30:31 -- scripts/common.sh@344 -- # : 1 00:03:00.455 10:30:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:00.455 10:30:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.455 10:30:31 -- scripts/common.sh@364 -- # decimal 1 00:03:00.455 10:30:31 -- scripts/common.sh@352 -- # local d=1 00:03:00.455 10:30:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.455 10:30:31 -- scripts/common.sh@354 -- # echo 1 00:03:00.455 10:30:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:00.455 10:30:31 -- scripts/common.sh@365 -- # decimal 2 00:03:00.455 10:30:31 -- scripts/common.sh@352 -- # local d=2 00:03:00.455 10:30:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.455 10:30:31 -- scripts/common.sh@354 -- # echo 2 00:03:00.455 10:30:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:00.455 10:30:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:00.455 10:30:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:00.455 10:30:31 -- scripts/common.sh@367 -- # return 0 00:03:00.455 10:30:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.455 10:30:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.455 10:30:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.455 10:30:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.455 10:30:31 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:00.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.455 --rc genhtml_branch_coverage=1 00:03:00.455 --rc genhtml_function_coverage=1 00:03:00.455 --rc genhtml_legend=1 00:03:00.455 --rc geninfo_all_blocks=1 00:03:00.455 --rc geninfo_unexecuted_blocks=1 00:03:00.455 00:03:00.455 ' 00:03:00.456 10:30:31 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:00.456 10:30:31 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:00.456 10:30:31 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:00.456 10:30:31 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:00.456 10:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:00.456 10:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:00.456 10:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:00.456 10:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:00.456 10:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:00.456 10:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:00.456 10:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:00.456 10:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:00.456 10:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:00.456 10:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:00.456 10:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:00.456 10:30:31 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:00.456 10:30:31 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:00.456 
10:30:31 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:00.456 10:30:31 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:00.456 10:30:31 -- setup/acl.sh@12 -- # devs=() 00:03:00.456 10:30:31 -- setup/acl.sh@12 -- # declare -a devs 00:03:00.456 10:30:31 -- setup/acl.sh@13 -- # drivers=() 00:03:00.456 10:30:31 -- setup/acl.sh@13 -- # declare -A drivers 00:03:00.456 10:30:31 -- setup/acl.sh@51 -- # setup reset 00:03:00.456 10:30:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.456 10:30:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:01.867 10:30:32 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:01.867 10:30:32 -- setup/acl.sh@16 -- # local dev driver 00:03:01.867 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.867 10:30:32 -- setup/acl.sh@15 -- # setup output status 00:03:01.867 10:30:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.867 10:30:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:01.867 Hugepages 00:03:01.867 node hugesize free / total 00:03:01.867 10:30:32 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:01.867 10:30:32 -- setup/acl.sh@19 -- # continue 00:03:01.867 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.867 00:03:01.867 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:01.867 10:30:32 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:01.867 10:30:32 -- setup/acl.sh@19 -- # continue 00:03:01.867 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.867 10:30:32 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:01.867 10:30:32 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:01.867 10:30:32 -- setup/acl.sh@20 -- # continue 00:03:01.867 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.867 10:30:32 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:01.867 10:30:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:01.867 10:30:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:01.867 10:30:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:01.867 10:30:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:01.867 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.129 10:30:32 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:02.129 10:30:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:02.129 10:30:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:02.129 10:30:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:02.129 10:30:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:02.129 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.129 10:30:32 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:02.129 10:30:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:02.129 10:30:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:02.129 10:30:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:02.129 10:30:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:02.129 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.129 10:30:32 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:02.129 10:30:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:02.129 10:30:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:02.129 10:30:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:02.129 10:30:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
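The row-parsing loop traced around this point, collect_setup_devs, turns the human-readable setup.sh status table into the devs array and drivers map the ACL tests use: each row is split with read, the Hugepages summary rows are rejected because their first data field is not a PCI address, virtio-pci rows are skipped by the driver test, and addresses listed in PCI_BLOCKED are dropped. A sketch reconstructed from that trace; the substring test for PCI_BLOCKED replaces the escaped-glob match visible in the trace, and calling setup.sh directly stands in for the harness's "setup output status" wrapper:

    declare -a devs=()    # populated below, as in "declare -a devs" above
    declare -A drivers=() # device -> driver map, as in "declare -A drivers"

    collect_setup_devs() {
        local dev driver
        while read -r _ dev _ _ _ driver _; do
            [[ $dev == *:*:*.* ]] || continue # not a PCI address: skip row
            [[ $driver == nvme ]] || continue # only NVMe controllers
            [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue
            devs+=("$dev")
            drivers["$dev"]=$driver
        done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)
        ((${#devs[@]} > 0)) # the trace ends this scan with (( 4 > 0 ))
    }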
00:03:02.129 10:30:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.129 10:30:32 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:02.129 10:30:32 -- setup/acl.sh@54 -- # run_test denied denied 00:03:02.129 10:30:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:02.129 10:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:02.129 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:03:02.129 ************************************ 00:03:02.129 START TEST denied 00:03:02.129 ************************************ 00:03:02.129 10:30:32 -- common/autotest_common.sh@1114 -- # denied 00:03:02.130 10:30:32 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:02.130 10:30:32 -- setup/acl.sh@38 -- # setup output config 00:03:02.130 10:30:32 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:02.130 10:30:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.130 10:30:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:03.521 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:03.521 10:30:33 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:03.521 10:30:33 -- setup/acl.sh@28 -- # local dev driver 00:03:03.521 10:30:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:03.521 10:30:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:03.521 10:30:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:03.521 10:30:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:03.521 10:30:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:03.521 10:30:33 -- setup/acl.sh@41 -- # setup reset 00:03:03.521 10:30:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.521 10:30:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:10.107 00:03:10.107 real 0m7.073s 00:03:10.107 user 0m0.733s 00:03:10.107 sys 0m1.175s 00:03:10.107 10:30:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:10.107 10:30:39 -- common/autotest_common.sh@10 -- # set +x 00:03:10.107 ************************************ 00:03:10.107 END TEST denied 00:03:10.107 ************************************ 00:03:10.107 10:30:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:10.107 10:30:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:10.107 10:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:10.107 10:30:39 -- common/autotest_common.sh@10 -- # set +x 00:03:10.107 ************************************ 00:03:10.107 START TEST allowed 00:03:10.107 ************************************ 00:03:10.107 10:30:39 -- common/autotest_common.sh@1114 -- # allowed 00:03:10.107 10:30:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:10.107 10:30:39 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:10.107 10:30:39 -- setup/acl.sh@45 -- # setup output config 00:03:10.107 10:30:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.107 10:30:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:10.368 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:10.368 10:30:40 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:10.368 10:30:40 -- setup/acl.sh@28 -- # local dev driver 00:03:10.368 10:30:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.368 10:30:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:10.368 10:30:40 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:03:10.368 10:30:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.368 10:30:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.368 10:30:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.368 10:30:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:10.368 10:30:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:10.368 10:30:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.368 10:30:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.368 10:30:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.368 10:30:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:10.368 10:30:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:10.368 10:30:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.368 10:30:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.368 10:30:40 -- setup/acl.sh@48 -- # setup reset 00:03:10.368 10:30:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.368 10:30:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:11.809 ************************************ 00:03:11.809 END TEST allowed 00:03:11.809 ************************************ 00:03:11.809 00:03:11.809 real 0m2.186s 00:03:11.809 user 0m0.841s 00:03:11.809 sys 0m1.093s 00:03:11.809 10:30:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:11.809 10:30:41 -- common/autotest_common.sh@10 -- # set +x 00:03:11.809 ************************************ 00:03:11.809 END TEST acl 00:03:11.809 ************************************ 00:03:11.809 00:03:11.809 real 0m11.106s 00:03:11.809 user 0m2.298s 00:03:11.809 sys 0m3.220s 00:03:11.809 10:30:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:11.809 10:30:42 -- common/autotest_common.sh@10 -- # set +x 00:03:11.809 10:30:42 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:11.809 10:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:11.809 10:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:11.809 10:30:42 -- common/autotest_common.sh@10 -- # set +x 00:03:11.809 ************************************ 00:03:11.809 START TEST hugepages 00:03:11.809 ************************************ 00:03:11.809 10:30:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:11.809 * Looking for test storage... 
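Both ACL subtests that just completed above funnel into the verify helper: the denied test expects setup.sh config to report "Skipping denied controller at 0000:00:06.0" under PCI_BLOCKED, while the allowed test rebinds 0000:00:06.0 (nvme -> uio_pci_generic) and then confirms the remaining controllers are still bound to nvme. A sketch of verify reconstructed from the readlink steps in the trace; the non-zero failure return is an assumption, since every check passes in this run:

    # verify as traced above: every PCI address passed in must exist in
    # sysfs and its bound driver symlink must resolve to "nvme".
    verify() {
        local dev driver
        for dev in "$@"; do
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }

    # As invoked by the allowed test above:
    # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0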
00:03:11.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:11.809 10:30:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:11.809 10:30:42 -- common/autotest_common.sh@1690 -- # lcov --version
00:03:11.809 10:30:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:11.809 10:30:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:11.809 10:30:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:11.809 10:30:42 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:11.809 10:30:42 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:11.809 10:30:42 -- scripts/common.sh@335 -- # IFS=.-:
00:03:11.809 10:30:42 -- scripts/common.sh@335 -- # read -ra ver1
00:03:11.809 10:30:42 -- scripts/common.sh@336 -- # IFS=.-:
00:03:11.809 10:30:42 -- scripts/common.sh@336 -- # read -ra ver2
00:03:11.809 10:30:42 -- scripts/common.sh@337 -- # local 'op=<'
00:03:11.809 10:30:42 -- scripts/common.sh@339 -- # ver1_l=2
00:03:11.809 10:30:42 -- scripts/common.sh@340 -- # ver2_l=1
00:03:11.809 10:30:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:11.809 10:30:42 -- scripts/common.sh@343 -- # case "$op" in
00:03:11.809 10:30:42 -- scripts/common.sh@344 -- # : 1
00:03:11.809 10:30:42 -- scripts/common.sh@363 -- # (( v = 0 ))
00:03:11.809 10:30:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:11.809 10:30:42 -- scripts/common.sh@364 -- # decimal 1
00:03:11.809 10:30:42 -- scripts/common.sh@352 -- # local d=1
00:03:11.809 10:30:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:11.809 10:30:42 -- scripts/common.sh@354 -- # echo 1
00:03:11.809 10:30:42 -- scripts/common.sh@364 -- # ver1[v]=1
00:03:11.809 10:30:42 -- scripts/common.sh@365 -- # decimal 2
00:03:11.809 10:30:42 -- scripts/common.sh@352 -- # local d=2
00:03:11.809 10:30:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:11.809 10:30:42 -- scripts/common.sh@354 -- # echo 2
00:03:11.809 10:30:42 -- scripts/common.sh@365 -- # ver2[v]=2
00:03:11.809 10:30:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:11.809 10:30:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:11.809 10:30:42 -- scripts/common.sh@367 -- # return 0
00:03:11.809 10:30:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:11.809 10:30:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:11.809 --rc genhtml_branch_coverage=1
00:03:11.809 --rc genhtml_function_coverage=1
00:03:11.809 --rc genhtml_legend=1
00:03:11.809 --rc geninfo_all_blocks=1
00:03:11.809 --rc geninfo_unexecuted_blocks=1
00:03:11.809
00:03:11.809 '
00:03:11.809 10:30:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:11.809 --rc genhtml_branch_coverage=1
00:03:11.809 --rc genhtml_function_coverage=1
00:03:11.809 --rc genhtml_legend=1
00:03:11.809 --rc geninfo_all_blocks=1
00:03:11.809 --rc geninfo_unexecuted_blocks=1
00:03:11.809
00:03:11.809 '
00:03:11.809 10:30:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:03:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:11.809 --rc genhtml_branch_coverage=1
00:03:11.809 --rc genhtml_function_coverage=1
00:03:11.809 --rc genhtml_legend=1
00:03:11.809 --rc geninfo_all_blocks=1
00:03:11.809 --rc geninfo_unexecuted_blocks=1
00:03:11.809
00:03:11.809 '
00:03:11.809 10:30:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:03:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:11.809 --rc genhtml_branch_coverage=1
00:03:11.809 --rc genhtml_function_coverage=1
00:03:11.809 --rc genhtml_legend=1
00:03:11.809 --rc geninfo_all_blocks=1
00:03:11.809 --rc geninfo_unexecuted_blocks=1
00:03:11.809
00:03:11.809 '
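The lt/cmp_versions trace above is a plain shell version comparison: split both version strings into numeric fields and compare them element-wise, so "lt 1.15 2" decides which lcov flags apply. A rough standalone equivalent (version_lt is an illustrative name; the traced cmp_versions also handles '>', '==', and '-'/':' separators):

  #!/usr/bin/env bash
  # Return 0 (true) when $1 is a strictly older version than $2.
  version_lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          # 10# forces base-10 so fields like "08" are not read as octal
          (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
          (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
      done
      return 1   # versions are equal
  }

  version_lt 1.15 2 && echo "lcov older than 2.0: keep the --rc branch/function coverage flags"

This is why the lcov 1.15 detected here gets the lcov_branch_coverage/lcov_function_coverage options folded into LCOV_OPTS and LCOV above.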
00:03:11.809 10:30:42 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:11.809 10:30:42 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:11.809 10:30:42 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:11.809 10:30:42 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:11.809 10:30:42 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:11.809 10:30:42 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:11.809 10:30:42 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:11.809 10:30:42 -- setup/common.sh@18 -- # local node=
00:03:11.809 10:30:42 -- setup/common.sh@19 -- # local var val
00:03:11.809 10:30:42 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.809 10:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.809 10:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.809 10:30:42 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.809 10:30:42 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.809 10:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.809 10:30:42 -- setup/common.sh@31 -- # IFS=': '
00:03:11.809 10:30:42 -- setup/common.sh@31 -- # read -r var val _
00:03:11.809 10:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5801036 kB' 'MemAvailable: 7357172 kB' 'Buffers: 2684 kB' 'Cached: 1769100 kB' 'SwapCached: 0 kB' 'Active: 465368 kB' 'Inactive: 1422044 kB' 'Active(anon): 126160 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 117276 kB' 'Mapped: 50948 kB' 'Shmem: 10532 kB' 'KReclaimable: 63624 kB' 'Slab: 161648 kB' 'SReclaimable: 63624 kB' 'SUnreclaim: 98024 kB' 'KernelStack: 6640 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 310152 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:11.810 [ xtrace elided: setup/common.sh@31-32 tests each /proc/meminfo key above against Hugepagesize and hits "continue" on every non-match, from MemTotal through HugePages_Surp ]
00:03:11.811 10:30:42 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:11.811 10:30:42 -- setup/common.sh@33 -- # echo 2048
00:03:11.811 10:30:42 -- setup/common.sh@33 -- # return 0
00:03:11.811 10:30:42 -- setup/hugepages.sh@16 -- # default_hugepages=2048
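The get_meminfo calls that dominate this part of the log all follow one pattern, visible in the trace: read /proc/meminfo (or a per-node meminfo when node= is set), split each line on ': ', and emit the value of the first matching key. A compact standalone sketch of the same technique in bash:

  #!/usr/bin/env bash
  # Look up one key in /proc/meminfo the way the traced loop does:
  # IFS=': ' splits "Hugepagesize:    2048 kB" into var=Hugepagesize,
  # val=2048, with the trailing unit landing in "_".
  get_meminfo() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$want" ]]; then
              echo "$val"   # kB for most keys; a bare page count for HugePages_*
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this machine

The xtrace is as noisy as it is because every non-matching key costs one [[ ]] test plus one continue, and the function is invoked once per statistic rather than caching the file.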
00:03:11.811 10:30:42 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:11.811 10:30:42 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:11.811 10:30:42 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:11.811 10:30:42 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:11.811 10:30:42 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:11.811 10:30:42 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:11.811 10:30:42 -- setup/hugepages.sh@207 -- # get_nodes
00:03:11.811 10:30:42 -- setup/hugepages.sh@27 -- # local node
00:03:11.811 10:30:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.811 10:30:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:11.811 10:30:42 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:11.811 10:30:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.811 10:30:42 -- setup/hugepages.sh@208 -- # clear_hp
00:03:11.811 10:30:42 -- setup/hugepages.sh@37 -- # local node hp
00:03:11.811 10:30:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:11.811 10:30:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.811 10:30:42 -- setup/hugepages.sh@41 -- # echo 0
00:03:11.811 10:30:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.811 10:30:42 -- setup/hugepages.sh@41 -- # echo 0
00:03:11.811 10:30:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:11.811 10:30:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:11.811 10:30:42 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:11.811 10:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:11.811 10:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:11.811 10:30:42 -- common/autotest_common.sh@10 -- # set +x
00:03:11.811 ************************************
00:03:11.811 START TEST default_setup
00:03:11.811 ************************************
00:03:11.811 10:30:42 -- common/autotest_common.sh@1114 -- # default_setup
00:03:11.811 10:30:42 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:11.811 10:30:42 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:11.811 10:30:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:11.811 10:30:42 -- setup/hugepages.sh@51 -- # shift
00:03:11.811 10:30:42 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:11.811 10:30:42 -- setup/hugepages.sh@52 -- # local node_ids
00:03:11.811 10:30:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.811 10:30:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:11.811 10:30:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:11.811 10:30:42 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:11.811 10:30:42 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.811 10:30:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:11.811 10:30:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:11.811 10:30:42 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.811 10:30:42 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.811 10:30:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:11.811 10:30:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:11.811 10:30:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:11.811 10:30:42 -- setup/hugepages.sh@73 -- # return 0
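Taken together, the clear_hp and get_test_nr_hugepages traces above amount to: zero out every per-node hugepage counter, then convert a target allocation size in kB into a page count (2097152 kB / 2048 kB per page = 1024 pages on this run). A hedged sketch of those two steps (requires root; HUGEPAGES_TARGET_KB is an illustrative variable name, not from the scripts):

  #!/usr/bin/env bash
  # Step 1: CLEAR_HUGE=yes semantics - reset every per-node hugepage pool.
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
      echo 0 > "$hp"
  done

  # Step 2: the size-to-pages arithmetic from get_test_nr_hugepages.
  HUGEPAGES_TARGET_KB=2097152
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
  nr_hugepages=$(( HUGEPAGES_TARGET_KB / hugepagesize_kb ))            # 1024
  echo "$nr_hugepages" > /proc/sys/vm/nr_hugepages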
00:03:11.811 10:30:42 -- setup/hugepages.sh@137 -- # setup output
00:03:11.811 10:30:42 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.811 10:30:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:12.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:12.756 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:03:12.756 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:03:13.020 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:03:13.020 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:03:13.020 10:30:43 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:13.020 10:30:43 -- setup/hugepages.sh@89 -- # local node
00:03:13.020 10:30:43 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:13.020 10:30:43 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:13.020 10:30:43 -- setup/hugepages.sh@92 -- # local surp
00:03:13.020 10:30:43 -- setup/hugepages.sh@93 -- # local resv
00:03:13.020 10:30:43 -- setup/hugepages.sh@94 -- # local anon
00:03:13.020 10:30:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.020 10:30:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:13.020 10:30:43 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.020 10:30:43 -- setup/common.sh@18 -- # local node=
00:03:13.020 10:30:43 -- setup/common.sh@19 -- # local var val
00:03:13.020 10:30:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.020 10:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.020 10:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.020 10:30:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.020 10:30:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.020 10:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.020 10:30:43 -- setup/common.sh@31 -- # IFS=': '
00:03:13.020 10:30:43 -- setup/common.sh@31 -- # read -r var val _
00:03:13.020 10:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7919732 kB' 'MemAvailable: 9475684 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467940 kB' 'Inactive: 1422060 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119572 kB' 'Mapped: 50796 kB' 'Shmem: 10496 kB' 'KReclaimable: 63220 kB' 'Slab: 161344 kB' 'SReclaimable: 63220 kB' 'SUnreclaim: 98124 kB' 'KernelStack: 6656 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55736 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:13.021 [ xtrace elided: per-key scan for AnonHugePages; every key from MemTotal through HardwareCorrupted is skipped via "continue" ]
00:03:13.021 10:30:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.021 10:30:43 -- setup/common.sh@33 -- # echo 0
00:03:13.021 10:30:43 -- setup/common.sh@33 -- # return 0
00:03:13.021 10:30:43 -- setup/hugepages.sh@97 -- # anon=0
00:03:13.021 10:30:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.021 10:30:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.021 10:30:43 -- setup/common.sh@18 -- # local node=
00:03:13.021 10:30:43 -- setup/common.sh@19 -- # local var val
00:03:13.021 10:30:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.021 10:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.021 10:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.021 10:30:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.021 10:30:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.021 10:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.021 10:30:43 -- setup/common.sh@31 -- # IFS=': '
00:03:13.021 10:30:43 -- setup/common.sh@31 -- # read -r var val _
00:03:13.022 10:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7919480 kB' 'MemAvailable: 9475436 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467672 kB' 'Inactive: 1422064 kB' 'Active(anon): 128464 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 50796 kB' 'Shmem: 10496 kB' 'KReclaimable: 63220 kB' 'Slab: 161336 kB' 'SReclaimable: 63220 kB' 'SUnreclaim: 98116 kB' 'KernelStack: 6624 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:13.022 [ xtrace elided: per-key scan for HugePages_Surp; all preceding /proc/meminfo keys skipped via "continue" ]
00:03:13.023 10:30:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.023 10:30:43 -- setup/common.sh@33 -- # echo 0
00:03:13.023 10:30:43 -- setup/common.sh@33 -- # return 0
00:03:13.023 10:30:43 -- setup/hugepages.sh@99 -- # surp=0
00:03:13.023 10:30:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:13.023 10:30:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:13.023 10:30:43 -- setup/common.sh@18 -- # local node=
00:03:13.023 10:30:43 -- setup/common.sh@19 -- # local var val
00:03:13.023 10:30:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.023 10:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.023 10:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.023 10:30:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.023 10:30:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.023 10:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.023 10:30:43 -- setup/common.sh@31 -- # IFS=': '
00:03:13.023 10:30:43 -- setup/common.sh@31 -- # read -r var val _
00:03:13.023 10:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7926536 kB' 'MemAvailable: 9482492 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467628 kB' 'Inactive: 1422064 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50796 kB' 'Shmem: 10496 kB' 'KReclaimable: 63220 kB' 'Slab: 161332 kB' 'SReclaimable: 63220 kB' 'SUnreclaim: 98112 kB' 'KernelStack: 6640 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:13.024 [ xtrace elided: per-key scan for HugePages_Rsvd in progress (MemTotal through WritebackTmp seen); the captured log breaks off here ]
setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # continue 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.024 10:30:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.024 10:30:43 -- setup/common.sh@33 -- # echo 0 00:03:13.024 10:30:43 -- setup/common.sh@33 -- # return 0 00:03:13.024 10:30:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:13.024 nr_hugepages=1024 00:03:13.024 10:30:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.024 resv_hugepages=0 00:03:13.024 surplus_hugepages=0 00:03:13.024 10:30:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.024 10:30:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.024 anon_hugepages=0 00:03:13.024 10:30:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.024 10:30:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.024 10:30:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.024 10:30:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.024 10:30:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.024 10:30:43 -- setup/common.sh@18 -- # local node= 00:03:13.024 10:30:43 -- setup/common.sh@19 -- # local var val 00:03:13.024 10:30:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.024 10:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.024 10:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.024 10:30:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.024 10:30:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.024 10:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.024 10:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.025 10:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7926536 kB' 'MemAvailable: 9482492 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467460 kB' 'Inactive: 1422064 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 
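What the scans above are doing: setup/common.sh's get_meminfo reads a meminfo file into an array, strips any "Node N " prefix, then walks the key/value pairs until it reaches the requested key, emitting 'continue' for every other key. A minimal sketch reconstructed from the xtrace lines above (variable names follow the traces; the real helper in setup/common.sh may differ in detail):

    shopt -s extglob                       # required by the +([0-9]) pattern below
    get_meminfo() {                        # usage: get_meminfo <key> [node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node stats come from sysfs when a node number is given.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do   # _ swallows the trailing "kB"
            [[ $var == "$get" ]] || continue   # the long 'continue' runs in this log
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

So get_meminfo HugePages_Rsvd prints 0 here, and get_meminfo HugePages_Surp 0 reads node0's sysfs meminfo instead of /proc/meminfo.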
00:03:13.024 10:30:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:13.024 10:30:43 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:13.024 10:30:43 -- setup/common.sh@18 -- # local node=
00:03:13.024 10:30:43 -- setup/common.sh@19 -- # local var val
00:03:13.024 10:30:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.024 10:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.024 10:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.024 10:30:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.024 10:30:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.024 10:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.025 10:30:43 -- setup/common.sh@31 -- # IFS=': '
00:03:13.025 10:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7926536 kB' 'MemAvailable: 9482492 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467460 kB' 'Inactive: 1422064 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119328 kB' 'Mapped: 50796 kB' 'Shmem: 10496 kB' 'KReclaimable: 63220 kB' 'Slab: 161332 kB' 'SReclaimable: 63220 kB' 'SUnreclaim: 98112 kB' 'KernelStack: 6608 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace abridged: each key above, MemTotal through Unaccepted, is tested against HugePages_Total and skipped with continue]
00:03:13.026 10:30:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:13.026 10:30:43 -- setup/common.sh@33 -- # echo 1024
00:03:13.026 10:30:43 -- setup/common.sh@33 -- # return 0
00:03:13.026 10:30:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:13.026 10:30:43 -- setup/hugepages.sh@112 -- # get_nodes
00:03:13.026 10:30:43 -- setup/hugepages.sh@27 -- # local node
00:03:13.026 10:30:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.026 10:30:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:13.026 10:30:43 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:13.026 10:30:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:13.026 10:30:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.026 10:30:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
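The bookkeeping here ties the reads together: hugepages.sh checks that the kernel's HugePages_Total matches the configured count plus surplus and reserved pages, then enumerates NUMA nodes from sysfs. A sketch of the hugepages.sh@107-@116 logic, assuming the get_meminfo helper sketched earlier (the per-node read in the loop is an assumption; the trace shows the already-expanded literal 1024):

    shopt -s extglob                       # for the node+([0-9]) glob below
    nr_hugepages=1024                      # what default_setup configured
    surp=$(get_meminfo HugePages_Surp)     # 0 in the trace above
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in the trace above
    # The kernel's total must equal configured pages plus surplus plus reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # One entry per NUMA node; this VM has only node0, holding all 1024 pages.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done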
00:03:13.026 10:30:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:13.026 10:30:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.026 10:30:43 -- setup/common.sh@18 -- # local node=0
00:03:13.026 10:30:43 -- setup/common.sh@19 -- # local var val
00:03:13.026 10:30:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.026 10:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.026 10:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:13.026 10:30:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:13.026 10:30:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.026 10:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.026 10:30:43 -- setup/common.sh@31 -- # IFS=': '
00:03:13.026 10:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7926536 kB' 'MemUsed: 4310560 kB' 'SwapCached: 0 kB' 'Active: 467720 kB' 'Inactive: 1422064 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 50796 kB' 'AnonPages: 119588 kB' 'Shmem: 10496 kB' 'KernelStack: 6676 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63220 kB' 'Slab: 161332 kB' 'SReclaimable: 63220 kB' 'SUnreclaim: 98112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace abridged: each node0 key above, MemTotal through HugePages_Free, is tested against HugePages_Surp and skipped with continue]
00:03:13.027 10:30:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.027 10:30:43 -- setup/common.sh@33 -- # echo 0
00:03:13.027 10:30:43 -- setup/common.sh@33 -- # return 0
00:03:13.027 10:30:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:13.027 10:30:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:13.027 10:30:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:13.027 10:30:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:13.027 10:30:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:13.027 node0=1024 expecting 1024
00:03:13.027 10:30:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:13.027
00:03:13.027 real	0m1.255s
00:03:13.027 user	0m0.484s
00:03:13.027 sys	0m0.636s
00:03:13.027 10:30:43 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:13.027 10:30:43 -- common/autotest_common.sh@10 -- # set +x
00:03:13.027 ************************************
00:03:13.027 END TEST default_setup
00:03:13.027 ************************************
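The "node0=1024 expecting 1024" pass line above comes from a per-node comparison: reserved and surplus pages are folded into each node's expected count, which is then checked against what sysfs reported. Roughly, per the hugepages.sh@115-@130 traces (a sketch; the exact ordering of expected vs. observed in the echo, and the surrounding variable handling, may differ):

    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))      # fold reserved pages into the expectation
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # plus surplus
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1        # expected counts, kept as a de-duplicated set
        sorted_s[nodes_sys[node]]=1         # observed counts, likewise
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]   # here: 1024 == 1024
    done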
00:03:13.027 10:30:43 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:13.027 10:30:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:13.027 10:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:13.027 10:30:43 -- common/autotest_common.sh@10 -- # set +x
00:03:13.027 ************************************
00:03:13.027 START TEST per_node_1G_alloc
00:03:13.027 ************************************
00:03:13.027 10:30:43 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:03:13.027 10:30:43 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:13.027 10:30:43 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:13.027 10:30:43 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:13.027 10:30:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:13.027 10:30:43 -- setup/hugepages.sh@51 -- # shift
00:03:13.027 10:30:43 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:13.027 10:30:43 -- setup/hugepages.sh@52 -- # local node_ids
00:03:13.027 10:30:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.027 10:30:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:13.027 10:30:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:13.027 10:30:43 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:13.027 10:30:43 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.027 10:30:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:13.027 10:30:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:13.027 10:30:43 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.027 10:30:43 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.027 10:30:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:13.027 10:30:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:13.027 10:30:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:13.027 10:30:43 -- setup/hugepages.sh@73 -- # return 0
00:03:13.027 10:30:43 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:13.027 10:30:43 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:03:13.027 10:30:43 -- setup/hugepages.sh@146 -- # setup output
00:03:13.027 10:30:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.027 10:30:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:13.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:13.602 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.602 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.602 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.602 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.602 10:30:44 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:03:13.602 10:30:44 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:13.602 10:30:44 -- setup/hugepages.sh@89 -- # local node
00:03:13.602 10:30:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:13.602 10:30:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:13.602 10:30:44 -- setup/hugepages.sh@92 -- # local surp
00:03:13.602 10:30:44 -- setup/hugepages.sh@93 -- # local resv
00:03:13.602 10:30:44 -- setup/hugepages.sh@94 -- # local anon
00:03:13.602 10:30:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
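Where the 512 comes from: per_node_1G_alloc asks get_test_nr_hugepages for 1 GiB (1048576 kB) on node 0, and with the 2048 kB default hugepage size visible in the meminfo dumps that works out to 512 pages. A sketch of the arithmetic, with the helper's structure assumed from the hugepages.sh@49-@73 traces:

    size=1048576                                   # kB requested: 1 GiB
    node_ids=('0')                                 # remaining args name the target nodes
    default_hugepages=2048                         # kB per page, per the Hugepagesize lines above
    (( size >= default_hugepages ))                # sanity check, hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
    nodes_test[0]=$nr_hugepages                    # 512 pages expected on node 0
    NRHUGE=$nr_hugepages HUGENODE=0                # the values handed to 'setup output' above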
00:03:13.602 10:30:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:13.602 10:30:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.602 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:13.602 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:13.602 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.602 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.602 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.602 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.602 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.603 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.603 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:13.603 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:13.603 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8980088 kB' 'MemAvailable: 10536048 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 468104 kB' 'Inactive: 1422072 kB' 'Active(anon): 128896 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119780 kB' 'Mapped: 51052 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161284 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98068 kB' 'KernelStack: 6664 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55736 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace abridged: each key above, MemTotal through HardwareCorrupted, is tested against AnonHugePages and skipped with continue]
00:03:13.604 10:30:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.604 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:13.604 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:13.604 10:30:44 -- setup/hugepages.sh@97 -- # anon=0
00:03:13.604 10:30:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.604 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.604 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:13.604 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:13.604 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.604 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.604 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.604 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.604 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.604 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.604 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:13.604 10:30:44 --
00:03:13.604 10:30:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.604 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.604 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:13.604 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:13.604 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.604 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.604 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.604 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.604 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.604 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.604 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:13.604 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:13.604 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8980088 kB' 'MemAvailable: 10536048 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467612 kB' 'Inactive: 1422072 kB' 'Active(anon): 128404 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119476 kB' 'Mapped: 50900 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161280 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98064 kB' 'KernelStack: 6612 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 scans every key of the snapshot above (MemTotal through HugePages_Rsvd); each non-matching key hits "continue"]
00:03:13.605 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.605 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:13.605 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:13.605 10:30:44 -- setup/hugepages.sh@99 -- # surp=0
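Each long 'MemTotal: ...' entry in this trace is xtrace echoing the snapshot get_meminfo works from: the whole meminfo file is read into an array with mapfile, any per-node "Node N " prefixes are stripped with the extglob expansion shown above, and the key scan then reads from a printf of that array. A sketch of that snapshot-and-scan shape, reusing the same expansions the trace shows:

    #!/usr/bin/env bash
    shopt -s extglob                      # required by the +([0-9]) pattern below
    get=HugePages_Surp
    mapfile -t mem < /proc/meminfo        # the snapshot xtrace prints in full
    mem=("${mem[@]#Node +([0-9]) }")      # no-op for /proc/meminfo, needed per-node
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$var = $val"; break; }
    done < <(printf '%s\n' "${mem[@]}")   # -> HugePages_Surp = 0 on this run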
00:03:13.605 10:30:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:13.605 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:13.605 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:13.605 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:13.605 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.605 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.605 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.605 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.605 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.605 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.605 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:13.605 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:13.605 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8980088 kB' 'MemAvailable: 10536048 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467440 kB' 'Inactive: 1422072 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119332 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161316 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98100 kB' 'KernelStack: 6608 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 scans every key of the snapshot above (MemTotal through HugePages_Free); each non-matching key hits "continue"]
00:03:13.607 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:13.607 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:13.607 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:13.607 10:30:44 -- setup/hugepages.sh@100 -- # resv=0
00:03:13.607 10:30:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:13.607 nr_hugepages=512
00:03:13.607 10:30:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:13.607 resv_hugepages=0
00:03:13.607 10:30:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:13.607 surplus_hugepages=0
00:03:13.607 10:30:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:13.607 anon_hugepages=0
00:03:13.607 10:30:44 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:13.607 10:30:44 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
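The hugepages.sh@107 and @110 arithmetic tests assert the accounting invariant behind the nr_hugepages/resv_hugepages/surplus_hugepages lines above: the kernel's HugePages_Total must equal the configured page count plus surplus and reserved pages. A sketch with this run's values; the awk one-liner stands in for the get_meminfo call in the trace:

    nr_hugepages=512 surp=0 resv=0        # values computed in the trace above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages"
    else
        echo "hugepage accounting mismatch: HugePages_Total=$total" >&2
        exit 1
    fi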
00:03:13.607 10:30:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:13.607 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:13.607 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:13.607 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:13.607 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.607 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.607 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.607 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.607 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.607 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.607 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:13.607 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:13.607 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8980088 kB' 'MemAvailable: 10536048 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467460 kB' 'Inactive: 1422072 kB' 'Active(anon): 128252 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119300 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161316 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98100 kB' 'KernelStack: 6592 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 scans every key of the snapshot above (MemTotal through Unaccepted); each non-matching key hits "continue"]
00:03:13.608 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:13.608 10:30:44 -- setup/common.sh@33 -- # echo 512
00:03:13.608 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:13.608 10:30:44 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:13.608 10:30:44 -- setup/hugepages.sh@112 -- # get_nodes
00:03:13.608 10:30:44 -- setup/hugepages.sh@27 -- # local node
00:03:13.608 10:30:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:13.608 10:30:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:13.608 10:30:44 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:13.608 10:30:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:13.608 10:30:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.608 10:30:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
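get_nodes, traced above, builds the per-node bookkeeping by globbing the NUMA node directories under /sys/devices/system/node, exactly the +([0-9]) extglob loop xtrace shows. A sketch of that enumeration; it reuses the hypothetical get_meminfo_sketch helper from the earlier sketch for the per-node counts:

    #!/usr/bin/env bash
    shopt -s extglob nullglob             # extglob for +([0-9]), nullglob for safety
    nodes_sys=()
    # One entry per NUMA node; on this single-node VM the loop runs once.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
    done
    echo "no_nodes=${#nodes_sys[@]}, node ids: ${!nodes_sys[*]}"   # no_nodes=1, node ids: 0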
00:03:13.608 10:30:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:13.608 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.608 10:30:44 -- setup/common.sh@18 -- # local node=0
00:03:13.608 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:13.608 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.608 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.609 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:13.609 10:30:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:13.609 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.609 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.609 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:13.609 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:13.609 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8980088 kB' 'MemUsed: 3257008 kB' 'SwapCached: 0 kB' 'Active: 467432 kB' 'Inactive: 1422072 kB' 'Active(anon): 128224 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 50720 kB' 'AnonPages: 119268 kB' 'Shmem: 10492 kB' 'KernelStack: 6644 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63216 kB' 'Slab: 161308 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 scans every key of the node0 snapshot above (MemTotal through HugePages_Free), timestamps 00:03:13.609 through 00:03:13.872; each non-matching key hits "continue"]
00:03:13.872 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.872 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:13.872 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:13.872 10:30:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:13.872 10:30:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:13.872 10:30:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:13.872 10:30:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:13.872 10:30:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:13.872 node0=512 expecting 512
00:03:13.872 10:30:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:13.872 
00:03:13.872 real 0m0.595s
00:03:13.872 user 0m0.266s
00:03:13.872 sys 0m0.355s
00:03:13.872 10:30:44 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:13.872 10:30:44 -- common/autotest_common.sh@10 -- # set +x
00:03:13.872 ************************************
00:03:13.872 END TEST per_node_1G_alloc
00:03:13.872 ************************************
10:30:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:13.872 10:30:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.872 10:30:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.872 10:30:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.872 10:30:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:13.872 10:30:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.872 10:30:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:13.872 10:30:44 -- setup/hugepages.sh@83 -- # : 0 00:03:13.872 10:30:44 -- setup/hugepages.sh@84 -- # : 0 00:03:13.872 10:30:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.872 10:30:44 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:13.872 10:30:44 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:13.872 10:30:44 -- setup/hugepages.sh@153 -- # setup output 00:03:13.872 10:30:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.872 10:30:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:14.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:14.134 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:14.134 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:14.134 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:14.134 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:14.399 10:30:44 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:14.399 10:30:44 -- setup/hugepages.sh@89 -- # local node 00:03:14.399 10:30:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.399 10:30:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.399 10:30:44 -- setup/hugepages.sh@92 -- # local surp 00:03:14.399 10:30:44 -- setup/hugepages.sh@93 -- # local resv 00:03:14.399 10:30:44 -- setup/hugepages.sh@94 -- # local anon 00:03:14.399 10:30:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.399 10:30:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.399 10:30:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.399 10:30:44 -- setup/common.sh@18 -- # local node= 00:03:14.399 10:30:44 -- setup/common.sh@19 -- # local var val 00:03:14.399 10:30:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.399 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.399 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.399 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.399 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.399 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.399 10:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.399 10:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.399 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929792 kB' 'MemAvailable: 9485752 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467792 kB' 'Inactive: 1422072 kB' 'Active(anon): 128584 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119688 kB' 'Mapped: 50928 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161424 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98208 kB' 'KernelStack: 6616 kB' 'PageTables: 
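[Editor's note: the get_test_nr_hugepages trace above turns a 2 GiB request into a page count, and HUGE_EVEN_ALLOC=yes asks setup.sh to spread that count evenly across NUMA nodes; with a single node here, node 0 takes all 1024 pages. A minimal sketch of that sizing arithmetic, assuming the 2048 kB default hugepage size this VM reports -- an illustration, not the SPDK script itself:]

    #!/usr/bin/env bash
    # Sketch of the sizing logic traced above (illustrative, not setup/hugepages.sh).
    size_kb=2097152   # requested pool in kB (2 GiB), as passed to get_test_nr_hugepages
    default_hugepages_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here

    # Refuse requests smaller than one page, mirroring the size >= default check above.
    (( size_kb >= default_hugepages_kb )) || { echo "request below one hugepage" >&2; exit 1; }

    nr_hugepages=$(( size_kb / default_hugepages_kb ))  # 2097152 / 2048 = 1024
    echo "NRHUGE=$nr_hugepages"                         # matches NRHUGE=1024 in the trace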
00:03:14.399 10:30:44 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:14.399 10:30:44 -- setup/hugepages.sh@89 -- # local node
00:03:14.399 10:30:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.399 10:30:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.399 10:30:44 -- setup/hugepages.sh@92 -- # local surp
00:03:14.399 10:30:44 -- setup/hugepages.sh@93 -- # local resv
00:03:14.399 10:30:44 -- setup/hugepages.sh@94 -- # local anon
00:03:14.399 10:30:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:14.399 10:30:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.399 10:30:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.399 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:14.399 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:14.399 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.399 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.399 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.399 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.399 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.399 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.399 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:14.399 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:14.399 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929792 kB' 'MemAvailable: 9485752 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467792 kB' 'Inactive: 1422072 kB' 'Active(anon): 128584 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119688 kB' 'Mapped: 50928 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161424 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98208 kB' 'KernelStack: 6616 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:14.399 10:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.399 10:30:44 -- setup/common.sh@32 -- # continue
[... xtrace elided: the setup/common.sh@31/@32 cycle repeats for every key in the snapshot above, from MemFree through HardwareCorrupted, each compared against AnonHugePages and skipped with continue ...]
00:03:14.400 10:30:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.400 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:14.400 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:14.400 10:30:44 -- setup/hugepages.sh@97 -- # anon=0
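[Editor's note: the call that just returned is the pattern repeated throughout this stage. get_meminfo reads /proc/meminfo with IFS=': ', walks the keys until one equals the requested name, and echoes its value; the backslash-heavy patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are only xtrace's character-escaped rendering of the right-hand side of [[ $var == $get ]]. A hedged sketch of that loop -- the real setup/common.sh stages the file through mapfile and supports per-node meminfo, which this simplification drops:]

    # Hedged sketch of setup/common.sh's get_meminfo, simplified from the xtrace above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then  # non-matches take the traced 'continue' path
                echo "$val"                # e.g. 0 for AnonHugePages in this run
                return 0
            fi
        done < /proc/meminfo
        return 1                           # key not present
    }

    anon=$(get_meminfo AnonHugePages)      # the anon=0 assignment seen above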
00:03:14.400 10:30:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.400 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.400 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:14.400 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:14.400 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.400 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.400 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.400 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.400 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.400 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.400 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:14.400 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930312 kB' 'MemAvailable: 9486272 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467708 kB' 'Inactive: 1422072 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 50876 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161420 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98204 kB' 'KernelStack: 6584 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:14.400 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:14.400 10:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.400 10:30:44 -- setup/common.sh@32 -- # continue
[... xtrace elided: the setup/common.sh@31/@32 cycle repeats for every key in the snapshot above, from MemFree through HugePages_Rsvd, each compared against HugePages_Surp and skipped with continue ...]
00:03:14.401 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.401 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:14.401 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:14.401 10:30:44 -- setup/hugepages.sh@99 -- # surp=0
00:03:14.401 10:30:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.401 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:14.401 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:14.401 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:14.401 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.401 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.401 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.401 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.401 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.401 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.401 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:14.401 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930416 kB' 'MemAvailable: 9486376 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467396 kB' 'Inactive: 1422072 kB' 'Active(anon): 128188 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119232 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161460 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98244 kB' 'KernelStack: 6592 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:14.401 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:14.401 10:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.401 10:30:44 -- setup/common.sh@32 -- # continue
[... xtrace elided: the setup/common.sh@31/@32 cycle repeats for every key in the snapshot above, from MemFree through HugePages_Free, each compared against HugePages_Rsvd and skipped with continue ...]
00:03:14.402 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.402 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:14.402 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:14.402 nr_hugepages=1024
00:03:14.402 resv_hugepages=0
00:03:14.402 surplus_hugepages=0
00:03:14.402 anon_hugepages=0
00:03:14.402 10:30:44 -- setup/hugepages.sh@100 -- # resv=0
00:03:14.402 10:30:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:14.402 10:30:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:14.402 10:30:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:14.402 10:30:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:14.402 10:30:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.402 10:30:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
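[Editor's note: the two arithmetic guards just traced are the actual pass/fail criterion of verify_nr_hugepages: the pool's HugePages_Total must equal the requested count plus surplus plus reserved pages, and with surp=0 and resv=0 it must equal nr_hugepages exactly. The same identity restated with this run's numbers -- a sketch, not the test script:]

    # Accounting identity behind the (( ... )) guards above, using this run's values.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 on this VM

    (( total == nr_hugepages + surp + resv )) || { echo "hugepage pool drifted" >&2; exit 1; }
    (( total == nr_hugepages )) && echo "no surplus or reserved pages outstanding"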
00:03:14.402 10:30:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:14.402 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:14.402 10:30:44 -- setup/common.sh@18 -- # local node=
00:03:14.402 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:14.402 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.402 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.402 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.402 10:30:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.402 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.402 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.402 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:14.402 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:14.402 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930416 kB' 'MemAvailable: 9486376 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467668 kB' 'Inactive: 1422072 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161452 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98236 kB' 'KernelStack: 6624 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:14.402 10:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:14.402 10:30:44 -- setup/common.sh@32 -- # continue
[... xtrace elided: the setup/common.sh@31/@32 cycle repeats for every key in the snapshot above, from MemFree through FileHugePages, each compared against HugePages_Total and skipped with continue; the captured log breaks off mid-scan at this point ...]
setup/common.sh@32 -- # continue 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # continue 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # continue 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # continue 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # continue 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.403 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.403 10:30:44 -- setup/common.sh@33 -- # echo 1024 00:03:14.403 10:30:44 -- setup/common.sh@33 -- # return 0 00:03:14.403 10:30:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.403 10:30:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.403 10:30:44 -- setup/hugepages.sh@27 -- # local node 00:03:14.403 10:30:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.403 10:30:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:14.403 10:30:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:14.403 10:30:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.403 10:30:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.403 10:30:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.403 10:30:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.403 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.403 10:30:44 -- setup/common.sh@18 -- # local node=0 00:03:14.403 10:30:44 -- setup/common.sh@19 -- # local var val 00:03:14.403 10:30:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.403 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.403 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.403 10:30:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.403 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.403 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.403 10:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.403 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930416 kB' 'MemUsed: 4306680 kB' 'SwapCached: 0 kB' 'Active: 467520 kB' 'Inactive: 1422072 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 50720 kB' 'AnonPages: 119376 kB' 'Shmem: 10492 kB' 
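For readability, here is a minimal standalone sketch of the lookup this xtrace keeps repeating, reconstructed from the trace itself (the mapfile, the "Node +([0-9])" prefix strip, and the IFS=': ' read loop). It is our rendering, not SPDK's setup/common.sh copied verbatim, and assumes bash with extglob available:

    get_meminfo() { # usage: get_meminfo <Key> [<numa-node>]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }") # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # the long runs of 'continue' in the trace
            echo "$val"                      # e.g. 1024 for HugePages_Total above
            return 0
        done
        return 1
    }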
00:03:14.403 10:30:44 -- setup/hugepages.sh@112 -- # get_nodes
00:03:14.403 10:30:44 -- setup/hugepages.sh@27 -- # local node
00:03:14.403 10:30:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.403 10:30:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:14.403 10:30:44 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:14.403 10:30:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:14.403 10:30:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.403 10:30:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.403 10:30:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:14.403 10:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.403 10:30:44 -- setup/common.sh@18 -- # local node=0
00:03:14.403 10:30:44 -- setup/common.sh@19 -- # local var val
00:03:14.403 10:30:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.403 10:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.403 10:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:14.403 10:30:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:14.403 10:30:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.403 10:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.403 10:30:44 -- setup/common.sh@31 -- # IFS=': '
00:03:14.403 10:30:44 -- setup/common.sh@31 -- # read -r var val _
00:03:14.403 10:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930416 kB' 'MemUsed: 4306680 kB' 'SwapCached: 0 kB' 'Active: 467520 kB' 'Inactive: 1422072 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 50720 kB' 'AnonPages: 119376 kB' 'Shmem: 10492 kB' 'KernelStack: 6624 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63216 kB' 'Slab: 161452 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:14.403 10:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.403 10:30:44 -- setup/common.sh@32 -- # continue
[... xtrace elided: the @31 IFS/read and @32 compare/continue cycle repeats for each remaining node0 meminfo key ...]
00:03:14.404 10:30:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.404 10:30:44 -- setup/common.sh@33 -- # echo 0
00:03:14.404 10:30:44 -- setup/common.sh@33 -- # return 0
00:03:14.404 10:30:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.404 10:30:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.404 10:30:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.404 10:30:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.404 node0=1024 expecting 1024
00:03:14.404 10:30:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:14.404 10:30:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:14.404
00:03:14.404 real 0m0.586s
00:03:14.404 user 0m0.252s
00:03:14.404 sys 0m0.351s
00:03:14.404 10:30:44 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:14.404 ************************************
00:03:14.404 END TEST even_2G_alloc
00:03:14.404 ************************************
00:03:14.404 10:30:44 -- common/autotest_common.sh@10 -- # set +x
00:03:14.404 10:30:44 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:14.404 10:30:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:14.404 10:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:14.404 10:30:44 -- common/autotest_common.sh@10 -- # set +x
00:03:14.404 ************************************
00:03:14.404 START TEST odd_alloc
00:03:14.404 ************************************
00:03:14.404 10:30:44 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:14.404 10:30:44 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:14.404 10:30:44 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:14.404 10:30:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:14.404 10:30:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.404 10:30:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:14.404 10:30:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:14.404 10:30:44 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:14.404 10:30:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.404 10:30:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:14.404 10:30:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:14.404 10:30:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.404 10:30:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.404 10:30:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:14.404 10:30:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:14.404 10:30:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.404 10:30:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:14.404 10:30:44 -- setup/hugepages.sh@83 -- # : 0
00:03:14.404 10:30:44 -- setup/hugepages.sh@84 -- # : 0
00:03:14.404 10:30:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:14.404 10:30:44 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:14.404 10:30:44 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:14.404 10:30:44 -- setup/hugepages.sh@160 -- # setup output
00:03:14.404 10:30:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.404 10:30:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
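The odd page count requested above falls straight out of the size arithmetic, consistent with rounding the request up to whole 2048 kB pages. A quick illustration (ours, not part of the log):

    # 2098176 kB is 2 GiB + 1 MiB, so it cannot be an even number of 2048 kB pages
    echo $(( (2098176 + 2048 - 1) / 2048 ))  # -> 1025, matching nr_hugepages=1025
    echo $(( 1025 * 2048 ))                  # -> 2099200 kB, the Hugetlb value reported below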
00:03:14.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:14.976 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:14.976 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:14.976 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:14.976 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:14.976 10:30:45 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:14.976 10:30:45 -- setup/hugepages.sh@89 -- # local node
00:03:14.976 10:30:45 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.976 10:30:45 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.976 10:30:45 -- setup/hugepages.sh@92 -- # local surp
00:03:14.976 10:30:45 -- setup/hugepages.sh@93 -- # local resv
00:03:14.976 10:30:45 -- setup/hugepages.sh@94 -- # local anon
00:03:14.976 10:30:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:14.976 10:30:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.976 10:30:45 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.976 10:30:45 -- setup/common.sh@18 -- # local node=
00:03:14.976 10:30:45 -- setup/common.sh@19 -- # local var val
00:03:14.976 10:30:45 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.976 10:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.976 10:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.976 10:30:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.976 10:30:45 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.976 10:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.976 10:30:45 -- setup/common.sh@31 -- # IFS=': '
00:03:14.976 10:30:45 -- setup/common.sh@31 -- # read -r var val _
00:03:14.976 10:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929932 kB' 'MemAvailable: 9485892 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467536 kB' 'Inactive: 1422072 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119372 kB' 'Mapped: 50852 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161540 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98324 kB' 'KernelStack: 6656 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:14.976 10:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.976 10:30:45 -- setup/common.sh@32 -- # continue
[... xtrace elided: the @31 IFS/read and @32 compare/continue cycle repeats for each remaining meminfo key ...]
00:03:14.977 10:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.977 10:30:45 -- setup/common.sh@33 -- # echo 0
00:03:14.977 10:30:45 -- setup/common.sh@33 -- # return 0
00:03:14.977 10:30:45 -- setup/hugepages.sh@97 -- # anon=0
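Stepping back, the verify_nr_hugepages sequence traced here reads three counters and then re-checks the total, as in the hugepages.sh@110 arithmetic seen earlier. A compact paraphrase (our sketch, reusing the get_meminfo sketch above; nr_hugepages comes from the test context):

    anon=$(get_meminfo AnonHugePages)    # only consulted while THP is not set to "never"
    surp=$(get_meminfo HugePages_Surp)   # surplus pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # reserved pages not yet faulted in
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage count mismatch" >&2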
00:03:14.977 10:30:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.977 10:30:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.977 10:30:45 -- setup/common.sh@18 -- # local node=
00:03:14.977 10:30:45 -- setup/common.sh@19 -- # local var val
00:03:14.977 10:30:45 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.977 10:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.977 10:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.977 10:30:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.977 10:30:45 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.977 10:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.977 10:30:45 -- setup/common.sh@31 -- # IFS=': '
00:03:14.977 10:30:45 -- setup/common.sh@31 -- # read -r var val _
00:03:14.977 10:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929932 kB' 'MemAvailable: 9485892 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467540 kB' 'Inactive: 1422072 kB' 'Active(anon): 128332 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119412 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161504 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98288 kB' 'KernelStack: 6640 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:14.977 10:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.977 10:30:45 -- setup/common.sh@32 -- # continue
[... xtrace elided: the @31 IFS/read and @32 compare/continue cycle repeats for each remaining meminfo key ...]
00:03:14.978 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.978 10:30:45 -- setup/common.sh@33 -- # echo 0
00:03:14.978 10:30:45 -- setup/common.sh@33 -- # return 0
00:03:14.978 10:30:45 -- setup/hugepages.sh@99 -- # surp=0
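The same counters the test parses out of meminfo are also exposed per page size under sysfs (and per node below /sys/devices/system/node/node<N>/hugepages/), which can be handy for cross-checking a run like this by hand. Purely our illustration, not something the test executes:

    base=/sys/kernel/mm/hugepages/hugepages-2048kB
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
        printf '%-20s %s\n' "$f:" "$(cat "$base/$f")"
    done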
00:03:14.978 10:30:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.978 10:30:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:14.978 10:30:45 -- setup/common.sh@18 -- # local node=
00:03:14.978 10:30:45 -- setup/common.sh@19 -- # local var val
00:03:14.978 10:30:45 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.978 10:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.978 10:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.978 10:30:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.978 10:30:45 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.978 10:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.978 10:30:45 -- setup/common.sh@31 -- # IFS=': '
00:03:14.978 10:30:45 -- setup/common.sh@31 -- # read -r var val _
00:03:14.978 10:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929932 kB' 'MemAvailable: 9485892 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467464 kB' 'Inactive: 1422072 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161504 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98288 kB' 'KernelStack: 6624 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue
[... xtrace elided: the @31 IFS/read and @32 compare/continue cycle runs down to the last keys captured here ...]
00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue
00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': '
00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _
00:03:14.979 10:30:45
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.979 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.979 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.980 10:30:45 -- setup/common.sh@33 -- # echo 0 00:03:14.980 10:30:45 -- setup/common.sh@33 -- # return 0 00:03:14.980 10:30:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:14.980 nr_hugepages=1025 00:03:14.980 10:30:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:14.980 resv_hugepages=0 00:03:14.980 surplus_hugepages=0 00:03:14.980 10:30:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.980 10:30:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.980 anon_hugepages=0 00:03:14.980 10:30:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.980 10:30:45 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:14.980 10:30:45 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:14.980 10:30:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.980 10:30:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.980 10:30:45 -- setup/common.sh@18 -- # local node= 00:03:14.980 10:30:45 -- setup/common.sh@19 -- # local var val 00:03:14.980 10:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.980 10:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.980 10:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.980 10:30:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.980 10:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.980 10:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929932 kB' 'MemAvailable: 9485892 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467476 kB' 'Inactive: 1422072 kB' 'Active(anon): 128268 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 119348 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161504 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98288 kB' 'KernelStack: 6608 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13457552 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.980 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.980 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.981 10:30:45 -- setup/common.sh@33 -- # echo 1025 00:03:14.981 10:30:45 -- setup/common.sh@33 -- # return 0 00:03:14.981 10:30:45 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:14.981 10:30:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.981 10:30:45 -- setup/hugepages.sh@27 -- # local node 00:03:14.981 10:30:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.981 10:30:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:14.981 10:30:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:14.981 10:30:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.981 10:30:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.981 10:30:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.981 10:30:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.981 10:30:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.981 10:30:45 -- setup/common.sh@18 -- # local node=0 00:03:14.981 10:30:45 -- setup/common.sh@19 -- # local var val 00:03:14.981 10:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.981 10:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.981 10:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.981 10:30:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.981 10:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.981 10:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7929932 kB' 'MemUsed: 4307164 kB' 'SwapCached: 0 kB' 'Active: 467392 kB' 'Inactive: 1422072 kB' 'Active(anon): 128184 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 50720 kB' 'AnonPages: 119316 kB' 'Shmem: 10492 kB' 'KernelStack: 6592 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63216 kB' 'Slab: 161504 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 
10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.981 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.981 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 
10:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # continue 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.982 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.982 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.982 10:30:45 -- setup/common.sh@33 -- # echo 0 00:03:14.982 10:30:45 -- setup/common.sh@33 -- # return 0 00:03:14.982 10:30:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.982 10:30:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.982 10:30:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.982 10:30:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.982 node0=1025 expecting 1025 00:03:14.982 10:30:45 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:14.982 10:30:45 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:14.982 00:03:14.982 real 0m0.596s 00:03:14.982 user 0m0.250s 00:03:14.982 sys 0m0.368s 00:03:14.982 ************************************ 00:03:14.982 END TEST odd_alloc 00:03:14.982 ************************************ 00:03:14.982 10:30:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:14.982 10:30:45 -- common/autotest_common.sh@10 -- # set +x 00:03:14.982 10:30:45 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:14.982 10:30:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.982 10:30:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.241 10:30:45 -- common/autotest_common.sh@10 -- # set +x 00:03:15.241 
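[editor's sketch] The odd_alloc test that just ended exercises one helper over and over: the meminfo reader whose xtrace fills the log above (setup/common.sh@17-@33). Below is a minimal Bash sketch reconstructing that pattern from the trace alone; the name get_meminfo matches the trace, but the body is an illustration, not the suite's canonical setup/common.sh. The backslash-escaped field names in the trace (e.g. \H\u\g\e\P\a\g\e\s\_\R\s\v\d) are simply the lookup key quoted for a literal [[ == ]] match.

#!/usr/bin/env bash
# Reconstruction of the /proc/meminfo lookup traced above.
shopt -s extglob   # required for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    # With a node id, prefer the per-node sysfs copy (trace @23-@24);
    # the last odd_alloc read used /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip it (@29).
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        # Each long run of "continue" in the log is this loop skipping
        # one field that is not the requested key (@31-@32).
        [[ $var == "$get" ]] || continue
        echo "$val"        # @33: value only, units dropped
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. get_meminfo HugePages_Total    -> 1025 in the run above
#      get_meminfo HugePages_Surp 0   -> 0 (node0 sysfs copy)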
************************************ 00:03:15.241 START TEST custom_alloc 00:03:15.241 ************************************ 00:03:15.241 10:30:45 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:15.241 10:30:45 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:15.241 10:30:45 -- setup/hugepages.sh@169 -- # local node 00:03:15.241 10:30:45 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:15.241 10:30:45 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:15.241 10:30:45 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:15.241 10:30:45 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:15.241 10:30:45 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:15.241 10:30:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:15.242 10:30:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.242 10:30:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.242 10:30:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.242 10:30:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:15.242 10:30:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:15.242 10:30:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.242 10:30:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.242 10:30:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.242 10:30:45 -- setup/hugepages.sh@83 -- # : 0 00:03:15.242 10:30:45 -- setup/hugepages.sh@84 -- # : 0 00:03:15.242 10:30:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:15.242 10:30:45 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:15.242 10:30:45 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:15.242 10:30:45 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:15.242 10:30:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.242 10:30:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.242 10:30:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:15.242 10:30:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:15.242 10:30:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.242 10:30:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.242 10:30:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:15.242 10:30:45 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:15.242 10:30:45 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:15.242 10:30:45 -- setup/hugepages.sh@78 -- # return 0 00:03:15.242 10:30:45 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:15.242 10:30:45 -- setup/hugepages.sh@187 -- # setup output 00:03:15.242 10:30:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.242 10:30:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:15.502 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:15.502 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:15.502 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:15.502 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:15.502 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:15.502 10:30:46 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:15.502 10:30:46 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:15.502 10:30:46 -- setup/hugepages.sh@89 -- # local node 00:03:15.502 10:30:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.502 10:30:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.502 10:30:46 -- setup/hugepages.sh@92 -- # local surp 00:03:15.502 10:30:46 -- setup/hugepages.sh@93 -- # local resv 00:03:15.502 10:30:46 -- setup/hugepages.sh@94 -- # local anon 00:03:15.502 10:30:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.502 10:30:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.502 10:30:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.502 10:30:46 -- setup/common.sh@18 -- # local node= 00:03:15.502 10:30:46 -- setup/common.sh@19 -- # local var val 00:03:15.502 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.502 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.502 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.502 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.502 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.502 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8986384 kB' 'MemAvailable: 10542344 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467792 kB' 'Inactive: 1422072 kB' 'Active(anon): 128584 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 50832 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161048 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 97832 kB' 'KernelStack: 6684 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.502 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.502 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.503 
10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.503 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.503 10:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.503 10:30:46 -- setup/common.sh@33 -- # echo 0 00:03:15.503 10:30:46 -- setup/common.sh@33 -- # return 0 00:03:15.503 10:30:46 -- setup/hugepages.sh@97 -- # anon=0 00:03:15.503 10:30:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.503 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.503 10:30:46 -- setup/common.sh@18 -- # local node= 00:03:15.503 10:30:46 -- setup/common.sh@19 -- # local var val 00:03:15.503 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.503 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.503 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.503 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.503 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.503 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.792 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8986132 kB' 'MemAvailable: 10542092 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467436 kB' 'Inactive: 1422072 kB' 'Active(anon): 128228 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119320 kB' 'Mapped: 50848 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161100 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 97884 kB' 'KernelStack: 6640 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.792 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.792 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.792 10:30:46 
[xtrace condensed: loop skipped the non-matching keys Cached .. AnonHugePages] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- #
continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.793 10:30:46 -- setup/common.sh@33 -- # echo 0 00:03:15.793 10:30:46 -- setup/common.sh@33 -- # return 0 00:03:15.793 10:30:46 -- setup/hugepages.sh@99 -- # surp=0 00:03:15.793 10:30:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.793 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.793 10:30:46 -- setup/common.sh@18 -- # local node= 00:03:15.793 10:30:46 -- setup/common.sh@19 -- # local var val 00:03:15.793 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.793 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.793 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.793 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.793 10:30:46 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:15.793 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8986132 kB' 'MemAvailable: 10542092 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467696 kB' 'Inactive: 1422072 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119580 kB' 'Mapped: 50848 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161100 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 97884 kB' 'KernelStack: 6640 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55736 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.793 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.793 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.794 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.794 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.794 10:30:46 -- 
[xtrace condensed: loop skipped the non-matching keys Inactive .. FilePmdMapped] 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var
val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.795 10:30:46 -- setup/common.sh@33 -- # echo 0 00:03:15.795 10:30:46 -- setup/common.sh@33 -- # return 0 00:03:15.795 10:30:46 -- setup/hugepages.sh@100 -- # resv=0 00:03:15.795 nr_hugepages=512 00:03:15.795 resv_hugepages=0 00:03:15.795 surplus_hugepages=0 00:03:15.795 10:30:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:15.795 10:30:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.795 10:30:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.795 anon_hugepages=0 00:03:15.795 10:30:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.795 10:30:46 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:15.795 10:30:46 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:15.795 10:30:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.795 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.795 10:30:46 -- setup/common.sh@18 -- # local node= 00:03:15.795 10:30:46 -- setup/common.sh@19 -- # local var val 00:03:15.795 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.795 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.795 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.795 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.795 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.795 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8985892 kB' 'MemAvailable: 10541852 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467252 kB' 'Inactive: 1422072 kB' 'Active(anon): 128044 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119348 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161096 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 97880 kB' 'KernelStack: 6624 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 319916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55720 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.795 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.795 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.795 
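The hugepages.sh@97-@110 steps above and below cross-check the kernel's hugepage accounting: anon, surp and resv were each fetched (all 0 in this run), and the HugePages_Total lookup now in progress must balance against the 512 pages the test configured. A sketch of the identity being asserted, with variable names from the trace (the exact expression in hugepages.sh is not visible in this log):

    nr_hugepages=512                      # pages requested by the test
    anon=$(get_meminfo AnonHugePages)     # 0 kB here
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 512
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: $total != $(( nr_hugepages + surp + resv ))"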
10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.795 [xtrace condensed: loop skipped the non-matching keys Active(file) .. CmaFree] 00:03:15.796
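Everything so far read the system-wide /proc/meminfo; the verification just below repeats the HugePages_Surp lookup per NUMA node, where the same helper switches to /sys/devices/system/node/node0/meminfo. Lines in the per-node file carry a 'Node 0 ' prefix, which the mem=("${mem[@]#Node +([0-9]) }") expansion that recurs before every printf strips away, so the scan loop sees the same 'key: value' shape in both cases. A standalone illustration of that extglob strip on hypothetical input:

    shopt -s extglob
    mem=('Node 0 MemTotal: 12237096 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")   # remove the shortest 'Node <digits> ' prefix per element
    printf '%s\n' "${mem[@]}"          # -> MemTotal: 12237096 kB
                                       # -> HugePages_Surp: 0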
10:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.796 10:30:46 -- setup/common.sh@33 -- # echo 512 00:03:15.796 10:30:46 -- setup/common.sh@33 -- # return 0 00:03:15.796 10:30:46 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:15.796 10:30:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.796 10:30:46 -- setup/hugepages.sh@27 -- # local node 00:03:15.796 10:30:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.796 10:30:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.796 10:30:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:15.796 10:30:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.796 10:30:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.796 10:30:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.796 10:30:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.796 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.796 10:30:46 -- setup/common.sh@18 -- # local node=0 00:03:15.796 10:30:46 -- setup/common.sh@19 -- # local var val 00:03:15.796 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.796 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.796 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.796 10:30:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.796 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.796 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.796 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8986376 kB' 'MemUsed: 3250720 kB' 'SwapCached: 0 kB' 'Active: 467492 kB' 'Inactive: 1422072 kB' 'Active(anon): 128284 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 50720 kB' 'AnonPages: 119360 kB' 'Shmem: 10492 kB' 'KernelStack: 6640 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63216 kB' 'Slab: 161080 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 97864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.796 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.796 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.796 [xtrace condensed: loop skipped the non-matching node0 keys SwapCached .. SReclaimable] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- #
continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # continue 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.797 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.797 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.797 10:30:46 -- setup/common.sh@33 -- # echo 0 00:03:15.797 10:30:46 -- setup/common.sh@33 -- # return 0 00:03:15.797 10:30:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.797 10:30:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.797 10:30:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.797 10:30:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.797 10:30:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:15.797 node0=512 expecting 512 00:03:15.797 10:30:46 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:15.797 00:03:15.797 real 0m0.584s 00:03:15.797 user 0m0.258s 00:03:15.797 sys 0m0.349s 00:03:15.797 10:30:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:15.797 10:30:46 -- common/autotest_common.sh@10 -- # set +x 00:03:15.797 ************************************ 00:03:15.797 END TEST custom_alloc 00:03:15.797 ************************************ 00:03:15.797 10:30:46 -- 
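The setup/common.sh@17-33 entries throughout this section trace SPDK's get_meminfo helper: it opens /proc/meminfo (or a per-node meminfo file), walks it field by field, and echoes the value of the requested field. A minimal sketch of that pattern, reconstructed from the xtrace alone -- an approximation, not the SPDK source verbatim:

#!/usr/bin/env bash
shopt -s extglob   # for the "Node +([0-9]) " prefix strip below

# Sketch of the get_meminfo helper traced at setup/common.sh@17-33 --
# reconstructed from the xtrace, not copied from the SPDK repository.
# Usage: get_meminfo <field> [numa-node]
get_meminfo() {
    local get=$1 node=${2:-}
    local line var val _
    local mem_f=/proc/meminfo

    # With a node id, read the per-node file instead. The trace probes this
    # path with an empty $node, hence the ".../node/node/meminfo" tests below.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#Node +([0-9]) }     # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        # each non-matching field is one "continue" entry in the xtrace
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the machine this log came from

Each non-matching field accounts for one IFS=': ' / read -r var val _ / continue triple in the trace, which is why a single lookup produces dozens of entries.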
00:03:15.797 10:30:46 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:15.797 10:30:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:15.797 10:30:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:15.797 10:30:46 -- common/autotest_common.sh@10 -- # set +x
00:03:15.797 ************************************
00:03:15.797 START TEST no_shrink_alloc
00:03:15.797 ************************************
00:03:15.797 10:30:46 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:15.797 10:30:46 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:15.797 10:30:46 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:15.797 10:30:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:15.797 10:30:46 -- setup/hugepages.sh@51 -- # shift
00:03:15.797 10:30:46 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:15.797 10:30:46 -- setup/hugepages.sh@52 -- # local node_ids
00:03:15.797 10:30:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.797 10:30:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:15.797 10:30:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:15.797 10:30:46 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:15.797 10:30:46 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.797 10:30:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:15.797 10:30:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:15.797 10:30:46 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.797 10:30:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.797 10:30:46 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:15.797 10:30:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:15.797 10:30:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:15.797 10:30:46 -- setup/hugepages.sh@73 -- # return 0
00:03:15.797 10:30:46 -- setup/hugepages.sh@198 -- # setup output
00:03:15.797 10:30:46 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.797 10:30:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:16.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:16.323 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.323 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.323 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.323 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.323 10:30:46 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:16.323 10:30:46 -- setup/hugepages.sh@89 -- # local node
00:03:16.323 10:30:46 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:16.323 10:30:46 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:16.323 10:30:46 -- setup/hugepages.sh@92 -- # local surp
00:03:16.323 10:30:46 -- setup/hugepages.sh@93 -- # local resv
00:03:16.323 10:30:46 -- setup/hugepages.sh@94 -- # local anon
00:03:16.323 10:30:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:16.323 10:30:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:16.323 10:30:46 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.323 10:30:46 -- setup/common.sh@18 -- # local node=
00:03:16.323 10:30:46 -- setup/common.sh@19 -- # local var val
00:03:16.323 10:30:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.323 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.323 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.323 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.323 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.323 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.323 10:30:46 -- setup/common.sh@31 -- # IFS=': '
00:03:16.323 10:30:46 -- setup/common.sh@31 -- # read -r var val _
00:03:16.323 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930568 kB' 'MemAvailable: 9486528 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 468204 kB' 'Inactive: 1422072 kB' 'Active(anon): 128996 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120084 kB' 'Mapped: 50724 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161172 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 97956 kB' 'KernelStack: 6632 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:16.323 [xtrace condensed: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ / continue for every /proc/meminfo field, MemTotal through HardwareCorrupted, until the match below]
00:03:16.324 10:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.324 10:30:46 -- setup/common.sh@33 -- # echo 0
00:03:16.324 10:30:46 -- setup/common.sh@33 -- # return 0
00:03:16.324 10:30:46 -- setup/hugepages.sh@97 -- # anon=0
00:03:16.324 10:30:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.324 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.324 10:30:46 -- setup/common.sh@18 -- # local node=
00:03:16.324 10:30:46 -- setup/common.sh@19 -- # local var val
00:03:16.324 10:30:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.324 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.324 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.324 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.324 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.324 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.324 10:30:46 -- setup/common.sh@31 -- # IFS=': '
00:03:16.324 10:30:46 -- setup/common.sh@31 -- # read -r var val _
00:03:16.324 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930812 kB' 'MemAvailable: 9486772 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467716 kB' 'Inactive: 1422072 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119576 kB' 'Mapped: 50796 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161300 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98084 kB' 'KernelStack: 6624 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
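As a consistency check on the snapshots above: get_test_nr_hugepages was invoked with 2097152 (kB), Hugepagesize is 2048 kB, and 2097152 / 2048 = 1024 -- exactly the nr_hugepages=1024 the trace records and the HugePages_Total: 1024 / Hugetlb: 2097152 kB the readouts report. A sketch of that arithmetic, with variable names borrowed from the trace (the division step is inferred; the xtrace only shows its inputs and result):

# Inferred arithmetic behind "nr_hugepages=1024" (setup/hugepages.sh@55-57):
size=2097152                                  # kB, argument to get_test_nr_hugepages
default_hugepages=2048                        # kB, Hugepagesize in the snapshots
(( size >= default_hugepages ))               # the @55 guard seen in the trace
nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages
echo "nr_hugepages=$nr_hugepages"             # matches HugePages_Total: 1024 above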
00:03:16.325 [xtrace condensed: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ / continue for every /proc/meminfo field, MemTotal through HugePages_Rsvd, until the match below]
00:03:16.326 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.326 10:30:46 -- setup/common.sh@33 -- # echo 0
00:03:16.326 10:30:46 -- setup/common.sh@33 -- # return 0
00:03:16.326 10:30:46 -- setup/hugepages.sh@99 -- # surp=0
00:03:16.326 10:30:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.326 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.326 10:30:46 -- setup/common.sh@18 -- # local node=
00:03:16.326 10:30:46 -- setup/common.sh@19 -- # local var val
00:03:16.326 10:30:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.326 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.326 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.326 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.326 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.326 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.326 10:30:46 -- setup/common.sh@31 -- # IFS=': '
00:03:16.326 10:30:46 -- setup/common.sh@31 -- # read -r var val _
00:03:16.326 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930812 kB' 'MemAvailable: 9486772 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467580 kB' 'Inactive: 1422072 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119464 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161296 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98080 kB' 'KernelStack: 6640 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
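A side note on reading these entries: the "10:30:46 -- setup/common.sh@31 -- #" prefix on each traced command is bash xtrace output under a customized PS4. Something along these lines reproduces the format (a guess at the pattern, not SPDK's actual PS4 definition; the real prefix keeps one directory component, which this sketch drops):

#!/usr/bin/env bash
# Approximate the "HH:MM:SS -- file@line -- # cmd" xtrace prefix seen in this log.
export PS4='$(date +%T) -- ${BASH_SOURCE[0]##*/}@${LINENO} -- # '
set -x
: a traced command   # printed as e.g.: 10:30:46 -- ps4-demo.sh@6 -- # : a traced command
set +x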
00:03:16.326 [xtrace condensed: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ / continue for every /proc/meminfo field, MemTotal through HugePages_Free, until the match below]
00:03:16.327 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.327 10:30:46 -- setup/common.sh@33 -- # echo 0
00:03:16.327 10:30:46 -- setup/common.sh@33 -- # return 0
00:03:16.327 10:30:46 -- setup/hugepages.sh@100 -- # resv=0
00:03:16.327 10:30:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:16.327 nr_hugepages=1024
00:03:16.327 10:30:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:16.327 resv_hugepages=0
00:03:16.327 10:30:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:16.327 surplus_hugepages=0
00:03:16.327 10:30:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:16.327 anon_hugepages=0
00:03:16.327 10:30:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.327 10:30:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:16.327 10:30:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:16.327 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:16.327 10:30:46 -- setup/common.sh@18 -- # local node=
00:03:16.327 10:30:46 -- setup/common.sh@19 -- # local var val
00:03:16.327 10:30:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.327 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.328 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.328 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.328 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.328 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.328 10:30:46 -- setup/common.sh@31 -- # IFS=': '
00:03:16.328 10:30:46 -- setup/common.sh@31 -- # read -r var val _
00:03:16.328 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930812 kB' 'MemAvailable: 9486772 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 467584 kB' 'Inactive: 1422072 kB' 'Active(anon): 128376 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119460 kB' 'Mapped: 50720 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161288 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98072 kB' 'KernelStack: 6640 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:16.328 [xtrace condensed: setup/common.sh@31-32 begin the same field-by-field scan for HugePages_Total; the captured log breaks off mid-scan, after the HardwareCorrupted field]
00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # continue 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.329 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.329 10:30:46 -- setup/common.sh@33 -- # echo 1024 00:03:16.329 10:30:46 -- setup/common.sh@33 -- # return 0 00:03:16.329 10:30:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.329 10:30:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.329 10:30:46 -- setup/hugepages.sh@27 -- # local node 00:03:16.329 10:30:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.329 10:30:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:16.329 10:30:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:16.329 10:30:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.329 10:30:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.329 10:30:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.329 10:30:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.329 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.329 10:30:46 -- setup/common.sh@18 -- # local node=0 
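The trace above is one pass of common.sh's meminfo lookup: snapshot the file with mapfile, strip any "Node N " prefix that per-node sysfs files carry, then split key/value pairs with IFS=': ' until the requested key turns up. A minimal bash reconstruction of that pattern, inferred from the trace rather than copied from the shipped script (the function name matches the trace; the body is an assumption):

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup seen in the trace (reconstruction,
# not the verbatim SPDK helper).
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem
    # With a node argument, read that node's stats instead; sysfs
    # prefixes every line there with "Node N ", stripped below.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Scan key by key -- the continue-loop in the trace -- until
        # the requested field is reached, then print its value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric column only; the "kB" unit is dropped
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total     # -> 1024 on the box traced above
get_meminfo HugePages_Surp 0    # -> 0, read from node0's own meminfo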
00:03:16.329 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.329 10:30:46 -- setup/common.sh@18 -- # local node=0
00:03:16.329 10:30:46 -- setup/common.sh@19 -- # local var val
00:03:16.329 10:30:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.329 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.329 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:16.329 10:30:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:16.329 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.329 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.329 10:30:46 -- setup/common.sh@31 -- # IFS=': '
00:03:16.329 10:30:46 -- setup/common.sh@31 -- # read -r var val _
00:03:16.329 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7930812 kB' 'MemUsed: 4306284 kB' 'SwapCached: 0 kB' 'Active: 467536 kB' 'Inactive: 1422072 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 50720 kB' 'AnonPages: 119428 kB' 'Shmem: 10492 kB' 'KernelStack: 6624 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63216 kB' 'Slab: 161284 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:16.329 10:30:46 -- setup/common.sh@31-32 -- # [read -r var val _ / continue repeats for every node0 meminfo key ahead of HugePages_Surp]
00:03:16.330 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.330 10:30:46 -- setup/common.sh@33 -- # echo 0
00:03:16.330 10:30:46 -- setup/common.sh@33 -- # return 0
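hugepages.sh then fans the same lookup out across NUMA nodes: get_nodes globs /sys/devices/system/node/node+([0-9]) to count nodes, and each node's counters come from that node's own meminfo, as in the node0 read just traced. A hedged sketch of that walk, reusing the get_meminfo reconstruction above (the nodes_sys array name follows the trace; the loop body is inferred):

# Enumerate NUMA nodes from sysfs and collect each node's hugepage count.
shopt -s extglob
declare -A nodes_sys

for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} peels the path down to the numeric node id.
    id=${node##*node}
    nodes_sys[$id]=$(get_meminfo HugePages_Total "$id")
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || echo "no NUMA nodes found" >&2
for id in "${!nodes_sys[@]}"; do
    echo "node$id=${nodes_sys[$id]} expecting 1024"   # the verdict line the log prints below
done

On the single-node VM traced here this reduces to no_nodes=1 and the one "node0=1024 expecting 1024" line.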
00:03:16.330 10:30:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.330 10:30:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.330 10:30:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.330 10:30:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.330 10:30:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:16.330 node0=1024 expecting 1024
00:03:16.330 10:30:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:16.330 10:30:46 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:16.330 10:30:46 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:16.330 10:30:46 -- setup/hugepages.sh@202 -- # setup output
00:03:16.330 10:30:46 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.330 10:30:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:16.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:16.906 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.906 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.906 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.906 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:16.906 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:16.906 10:30:47 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:16.906 10:30:47 -- setup/hugepages.sh@89 -- # local node
00:03:16.906 10:30:47 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:16.906 10:30:47 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:16.906 10:30:47 -- setup/hugepages.sh@92 -- # local surp
00:03:16.906 10:30:47 -- setup/hugepages.sh@93 -- # local resv
00:03:16.906 10:30:47 -- setup/hugepages.sh@94 -- # local anon
00:03:16.906 10:30:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:16.906 10:30:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
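The @96 test gates the anonymous-hugepage read on the kernel's THP policy: /sys/kernel/mm/transparent_hugepage/enabled reports a string like "always [madvise] never" with the active choice bracketed, and AnonHugePages is only worth reading when that choice is not [never]. The same gate as a standalone sketch (the sysfs path is the standard kernel location; variable names are illustrative, and get_meminfo is the reconstruction above):

# Read the active THP policy; the bracketed word is the current setting.
thp_enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_enabled != *"[never]"* ]]; then
    # THP is on (e.g. "always [madvise] never"), so anonymous huge
    # pages can exist and must be counted separately from the pool.
    anon=$(get_meminfo AnonHugePages)
else
    anon=0
fi
echo "anon_hugepages=$anon"

On the traced host the policy is "always [madvise] never", so the branch is taken and the lookup below returns 0.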
00:03:16.907 10:30:47 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.907 10:30:47 -- setup/common.sh@18 -- # local node=
00:03:16.907 10:30:47 -- setup/common.sh@19 -- # local var val
00:03:16.907 10:30:47 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.907 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.907 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.907 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.907 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.907 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.907 10:30:47 -- setup/common.sh@31 -- # IFS=': '
00:03:16.907 10:30:47 -- setup/common.sh@31 -- # read -r var val _
00:03:16.907 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932356 kB' 'MemAvailable: 9488316 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 466032 kB' 'Inactive: 1422072 kB' 'Active(anon): 126824 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117892 kB' 'Mapped: 50000 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161276 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98060 kB' 'KernelStack: 6652 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:16.907 10:30:47 -- setup/common.sh@31-32 -- # [read -r var val _ / continue repeats for every meminfo key ahead of AnonHugePages]
00:03:16.907 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.907 10:30:47 -- setup/common.sh@33 -- # echo 0
00:03:16.907 10:30:47 -- setup/common.sh@33 -- # return 0
00:03:16.907 10:30:47 -- setup/hugepages.sh@97 -- # anon=0
00:03:16.907 10:30:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.907 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.907 10:30:47 -- setup/common.sh@18 -- # local node=
00:03:16.907 10:30:47 -- setup/common.sh@19 -- # local var val
00:03:16.907 10:30:47 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.907 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.907 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.907 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.907 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.907 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.907 10:30:47 -- setup/common.sh@31 -- # IFS=': '
00:03:16.907 10:30:47 -- setup/common.sh@31 -- # read -r var val _
00:03:16.907 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932196 kB' 'MemAvailable: 9488156 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 465512 kB' 'Inactive: 1422072 kB' 'Active(anon): 126304 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117388 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161300 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98084 kB' 'KernelStack: 6528 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
00:03:16.908 10:30:47 -- setup/common.sh@31-32 -- # [read -r var val _ / continue repeats for every meminfo key ahead of HugePages_Surp]
00:03:16.909 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.909 10:30:47 -- setup/common.sh@33 -- # echo 0
00:03:16.909 10:30:47 -- setup/common.sh@33 -- # return 0
00:03:16.909 10:30:47 -- setup/hugepages.sh@99 -- # surp=0
00:03:16.909 10:30:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.909 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.909 10:30:47 -- setup/common.sh@18 -- # local node=
00:03:16.909 10:30:47 -- setup/common.sh@19 -- # local var val
00:03:16.909 10:30:47 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.909 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.909 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.909 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.909 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.909 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.909 10:30:47 -- setup/common.sh@31 -- # IFS=': '
00:03:16.909 10:30:47 -- setup/common.sh@31 -- # read -r var val _
00:03:16.909 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932196 kB' 'MemAvailable: 9488156 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 465356 kB' 'Inactive: 1422072 kB' 'Active(anon): 126148 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117248 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161272 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98056 kB' 'KernelStack: 6544 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
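These @97-@100 reads are the inputs to the reconciliation traced earlier at hugepages.sh@107/@110: the pool's total must equal the requested count plus surplus plus reserved pages (1024 == 1024 + 0 + 0 on this host). A sketch of that check using the get_meminfo reconstruction above (nr_hugepages stands in for the count the test configured earlier in the run):

nr_hugepages=1024
anon=$(get_meminfo AnonHugePages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
# Mirrors (( 1024 == nr_hugepages + surp + resv )) from the trace:
# surplus pages were allocated beyond the request and reserved pages
# are promised but not yet faulted in, so both count against the total.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "hugepage accounting mismatch: total=$total" >&2
fi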
00:03:16.910 10:30:47 -- setup/common.sh@31-32 -- # [read -r var val _ / continue repeats for each meminfo key, MemTotal through Slab]
00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.910 10:30:47 -- setup/common.sh@32 -- # continue 00:03:16.910 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.910 10:30:47 -- 
00:03:16.910 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.911 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.911 10:30:47 -- setup/common.sh@33 -- # echo 0 00:03:16.911 10:30:47 -- setup/common.sh@33 -- # return 0 00:03:16.911 10:30:47 -- setup/hugepages.sh@100 -- # resv=0 00:03:16.911 10:30:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.911 nr_hugepages=1024 00:03:16.911 10:30:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.911 resv_hugepages=0 00:03:16.911 10:30:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.911 surplus_hugepages=0 00:03:16.911 10:30:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.911 anon_hugepages=0 00:03:16.911 10:30:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.911 10:30:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.911 10:30:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.911 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.911 10:30:47 -- setup/common.sh@18 -- # local node= 00:03:16.911 10:30:47 -- setup/common.sh@19 -- # local var val 00:03:16.911 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.911 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.911 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.911 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.911 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.911 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.911 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.911 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.911 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932196 kB' 'MemAvailable: 9488156 kB' 'Buffers: 2684 kB' 'Cached: 1769088 kB' 'SwapCached: 0 kB' 'Active: 465332 kB' 'Inactive: 1422072 kB' 'Active(anon): 126124 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117248 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63216 kB' 'Slab: 161272 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98056 kB' 'KernelStack: 6544 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB'
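The records above are the get_meminfo helper scanning /proc/meminfo one field at a time with an IFS=': ' read loop. A minimal sketch of that scan, written to match the @31-@33 records in the trace (the function name get_meminfo_value is illustrative, not the script's exact source):

    # Scan /proc/meminfo line by line, splitting on ': ', and print the
    # value of the requested key; stop at the first match.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # get_meminfo_value HugePages_Rsvd   -> 0 on this host
    # get_meminfo_value HugePages_Total  -> 1024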
00:03:16.911 10:30:47 [... repeated records omitted: setup/common.sh@32 steps through every key from MemTotal through Unaccepted, hitting "continue" on each, until HugePages_Total matches ...]
00:03:16.912 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.912 10:30:47 -- setup/common.sh@33 -- # echo 1024 00:03:16.912 10:30:47 -- setup/common.sh@33 -- # return 0 00:03:16.912 10:30:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.912 10:30:47 -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.912 10:30:47 -- setup/hugepages.sh@27 -- # local node 00:03:16.912 10:30:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.912 10:30:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:16.912 10:30:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:16.912 10:30:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.912 10:30:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.912 10:30:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
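get_nodes, traced above, discovers NUMA nodes by globbing /sys/devices/system/node. A condensed sketch of that walk (extglob is required for the +([0-9]) pattern; the harness around it is omitted, and the 1024-page expectation is this run's value):

    # Record the expected hugepage count for every NUMA node directory
    # (this VM has a single node0, hence no_nodes=1 in the trace).
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024   # expected pages per node
    done
    no_nodes=${#nodes_sys[@]}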
00:03:16.912 10:30:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.912 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.912 10:30:47 -- setup/common.sh@18 -- # local node=0 00:03:16.912 10:30:47 -- setup/common.sh@19 -- # local var val 00:03:16.912 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.912 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.912 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.912 10:30:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.912 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.912 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.912 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.912 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.912 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7932196 kB' 'MemUsed: 4304900 kB' 'SwapCached: 0 kB' 'Active: 465320 kB' 'Inactive: 1422072 kB' 'Active(anon): 126112 kB' 'Inactive(anon): 0 kB' 'Active(file): 339208 kB' 'Inactive(file): 1422072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771772 kB' 'Mapped: 49872 kB' 'AnonPages: 117216 kB' 'Shmem: 10492 kB' 'KernelStack: 6544 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63216 kB' 'Slab: 161272 kB' 'SReclaimable: 63216 kB' 'SUnreclaim: 98056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
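Because this call passes a node argument, get_meminfo reads the per-node sysfs file instead of /proc/meminfo; sysfs lines carry a "Node 0 " prefix that is stripped before parsing. A sketch of that source selection, following the common.sh@22-@29 records above:

    # Pick the meminfo source: global /proc/meminfo, or the per-node
    # sysfs copy when it exists. Sysfs lines look like
    # "Node 0 MemTotal: ...", so strip the prefix before parsing.
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")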
00:03:16.912 10:30:47 [... repeated records omitted: setup/common.sh@32 steps through every node0 meminfo key from MemTotal through HugePages_Free, hitting "continue" on each, until HugePages_Surp matches ...] 00:03:16.913 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.913 10:30:47 -- setup/common.sh@33 -- # echo 0 00:03:16.913 10:30:47 -- setup/common.sh@33 -- # return 0 00:03:16.913 10:30:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.913 10:30:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.913 10:30:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.913 10:30:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.913 10:30:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.913 node0=1024 expecting 1024 00:03:16.913 10:30:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.913 00:03:16.913 real 0m1.193s 00:03:16.913 user 0m0.503s 00:03:16.913 sys 0m0.690s 00:03:16.913 10:30:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:16.913 ************************************ 00:03:16.913 END TEST no_shrink_alloc 00:03:16.913 ************************************ 00:03:16.913 10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:03:16.913 10:30:47 -- setup/hugepages.sh@217 -- # clear_hp 00:03:16.913 10:30:47 -- setup/hugepages.sh@37 -- # local node hp 00:03:16.913 10:30:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.913 10:30:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.913 10:30:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.913 10:30:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.913 10:30:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.913 10:30:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:16.913 10:30:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:17.175 ************************************ 00:03:17.175 END TEST hugepages 00:03:17.175 ************************************ 00:03:17.175 00:03:17.175 real 0m5.428s 00:03:17.175 user 0m2.211s 00:03:17.175 sys 0m3.004s 00:03:17.175 10:30:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:17.175 10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:03:17.175 10:30:47 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:17.175 10:30:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.175 10:30:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
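The clear_hp teardown traced above (hugepages.sh@39-@41) zeroes every hugepage pool before the next suite starts. A sketch of what those records amount to, using a plain glob in place of the script's nodes_sys bookkeeping (writing these sysfs files needs root):

    # Reset every per-node hugepage pool to zero pages.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes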
10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:03:17.175 ************************************ 00:03:17.175 START TEST driver 00:03:17.175 ************************************ 00:03:17.175 10:30:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:17.175 * Looking for test storage... 00:03:17.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:17.175 10:30:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:17.175 10:30:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:17.175 10:30:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:17.175 10:30:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:17.175 10:30:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:17.175 10:30:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:17.175 10:30:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:17.175 10:30:47 -- scripts/common.sh@335 -- # IFS=.-: 00:03:17.175 10:30:47 -- scripts/common.sh@335 -- # read -ra ver1 00:03:17.175 10:30:47 -- scripts/common.sh@336 -- # IFS=.-: 00:03:17.175 10:30:47 -- scripts/common.sh@336 -- # read -ra ver2 00:03:17.175 10:30:47 -- scripts/common.sh@337 -- # local 'op=<' 00:03:17.175 10:30:47 -- scripts/common.sh@339 -- # ver1_l=2 00:03:17.175 10:30:47 -- scripts/common.sh@340 -- # ver2_l=1 00:03:17.175 10:30:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:17.175 10:30:47 -- scripts/common.sh@343 -- # case "$op" in 00:03:17.175 10:30:47 -- scripts/common.sh@344 -- # : 1 00:03:17.175 10:30:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:17.175 10:30:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:17.175 10:30:47 -- scripts/common.sh@364 -- # decimal 1 00:03:17.175 10:30:47 -- scripts/common.sh@352 -- # local d=1 00:03:17.175 10:30:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:17.175 10:30:47 -- scripts/common.sh@354 -- # echo 1 00:03:17.175 10:30:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:17.175 10:30:47 -- scripts/common.sh@365 -- # decimal 2 00:03:17.175 10:30:47 -- scripts/common.sh@352 -- # local d=2 00:03:17.175 10:30:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:17.175 10:30:47 -- scripts/common.sh@354 -- # echo 2 00:03:17.175 10:30:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:17.175 10:30:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:17.175 10:30:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:17.175 10:30:47 -- scripts/common.sh@367 -- # return 0 00:03:17.175 10:30:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:17.175 10:30:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.175 --rc genhtml_branch_coverage=1 00:03:17.175 --rc genhtml_function_coverage=1 00:03:17.175 --rc genhtml_legend=1 00:03:17.175 --rc geninfo_all_blocks=1 00:03:17.175 --rc geninfo_unexecuted_blocks=1 00:03:17.175 00:03:17.175 ' 00:03:17.175 10:30:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.175 --rc genhtml_branch_coverage=1 00:03:17.175 --rc genhtml_function_coverage=1 00:03:17.175 --rc genhtml_legend=1 00:03:17.175 --rc geninfo_all_blocks=1 00:03:17.175 --rc geninfo_unexecuted_blocks=1 00:03:17.175 00:03:17.175 ' 00:03:17.175 10:30:47 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:03:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.175 --rc genhtml_branch_coverage=1 00:03:17.175 --rc genhtml_function_coverage=1 00:03:17.175 --rc genhtml_legend=1 00:03:17.175 --rc geninfo_all_blocks=1 00:03:17.175 --rc geninfo_unexecuted_blocks=1 00:03:17.175 00:03:17.175 ' 00:03:17.175 10:30:47 -- setup/driver.sh@68 -- # setup reset 00:03:17.175 10:30:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.175 10:30:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:23.767 10:30:53 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:23.767 10:30:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.767 10:30:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.767 10:30:53 -- common/autotest_common.sh@10 -- # set +x 00:03:23.767 ************************************ 00:03:23.767 START TEST guess_driver 00:03:23.767 ************************************ 00:03:23.767 10:30:53 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:23.767 10:30:53 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:23.767 10:30:53 -- setup/driver.sh@47 -- # local fail=0 00:03:23.767 10:30:53 -- setup/driver.sh@49 -- # pick_driver 00:03:23.767 10:30:53 -- setup/driver.sh@36 -- # vfio 00:03:23.767 10:30:53 -- setup/driver.sh@21 -- # local iommu_groups 00:03:23.767 10:30:53 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:23.767 10:30:53 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:23.767 10:30:53 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:23.767 10:30:53 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:23.767 10:30:53 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:23.767 10:30:53 -- setup/driver.sh@32 -- # return 1 00:03:23.767 10:30:53 -- setup/driver.sh@38 -- # uio 00:03:23.767 10:30:53 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:23.767 10:30:53 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:23.767 10:30:53 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:23.767 10:30:53 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:23.767 10:30:53 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:23.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:23.767 10:30:53 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:23.767 10:30:53 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:23.767 10:30:53 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:23.767 10:30:53 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:23.767 Looking for driver=uio_pci_generic 00:03:23.767 10:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:23.767 10:30:53 -- setup/driver.sh@45 -- # setup output config 00:03:23.767 10:30:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.767 10:30:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:24.029
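The pick_driver trace above rejects vfio (no IOMMU groups, unsafe no-IOMMU mode absent) and accepts uio_pci_generic once modprobe --show-depends resolves it to real .ko modules. A condensed sketch of that decision; the function body is illustrative, not the script's exact source:

    # Prefer vfio when the host has IOMMU groups or unsafe no-IOMMU mode
    # is enabled; otherwise fall back to uio_pci_generic if its module
    # dependency chain resolves.
    pick_driver() {
        shopt -s nullglob
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio
        unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode \
            2>/dev/null)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }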
10:30:54 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:24.029 10:30:54 -- setup/driver.sh@58 -- # continue 00:03:24.029 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.291 10:30:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.291 10:30:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:24.291 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.291 10:30:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.291 10:30:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:24.291 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.291 10:30:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.291 10:30:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:24.291 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.291 10:30:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.291 10:30:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:24.291 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.291 10:30:54 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:24.291 10:30:54 -- setup/driver.sh@65 -- # setup reset 00:03:24.291 10:30:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.291 10:30:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.952 ************************************ 00:03:30.952 END TEST guess_driver 00:03:30.952 ************************************ 00:03:30.952 00:03:30.952 real 0m7.126s 00:03:30.952 user 0m0.725s 00:03:30.952 sys 0m1.289s 00:03:30.952 10:31:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:30.952 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:30.952 ************************************ 00:03:30.952 END TEST driver 00:03:30.952 ************************************ 00:03:30.952 00:03:30.952 real 0m13.179s 00:03:30.952 user 0m1.104s 00:03:30.952 sys 0m2.001s 00:03:30.952 10:31:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:30.952 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:30.952 10:31:00 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:30.952 10:31:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.952 10:31:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.952 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:30.952 ************************************ 00:03:30.952 START TEST devices 00:03:30.952 ************************************ 00:03:30.952 10:31:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:30.952 * Looking for test storage... 
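Before each suite, autotest_common.sh checks the installed lcov against version 2 (the "lt 1.15 2" and cmp_versions records in the driver trace above; the devices suite repeats the same gate below). A condensed sketch of that dotted-version comparison (the helper name version_lt is illustrative):

    # Compare two dotted versions field by field, numerically.
    # version_lt 1.15 2 succeeds, matching the "lt 1.15 2" record.
    version_lt() {
        local -a v1 v2
        local i n
        IFS=. read -r -a v1 <<< "$1"
        IFS=. read -r -a v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal
    }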
00:03:30.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:30.952 10:31:00 [... repeated records omitted: the same lcov version gate traced in the driver suite runs again for the devices suite (lcov 1.15 compared against 2), exporting the identical LCOV_OPTS and LCOV settings ...] 00:03:30.952 10:31:00 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:30.952 10:31:00 -- setup/devices.sh@192 -- # setup reset 00:03:30.952 10:31:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.952 10:31:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:31.525 10:31:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:31.525 10:31:02 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:31.525 10:31:02 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:31.525 10:31:02 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:31.525 10:31:02 [... repeated records omitted: common/autotest_common.sh@1667-@1660 runs is_block_zoned for nvme0c0n1, nvme0n1, nvme1n1, nvme1n2 and nvme1n3; every /sys/block/*/queue/zoned reads "none" ...] 00:03:31.525 10:31:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:31.525 10:31:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:31.525 10:31:02 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:31.525 10:31:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:31.525 10:31:02 --
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:31.525 10:31:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:31.525 10:31:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:31.525 10:31:02 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:31.525 10:31:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:31.525 10:31:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:31.525 10:31:02 -- setup/devices.sh@196 -- # blocks=() 00:03:31.525 10:31:02 -- setup/devices.sh@196 -- # declare -a blocks 00:03:31.525 10:31:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:31.525 10:31:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:31.525 10:31:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:31.525 10:31:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:31.525 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:31.525 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:31.525 10:31:02 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:03:31.525 10:31:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:31.525 10:31:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:31.525 10:31:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:31.525 10:31:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:31.525 No valid GPT data, bailing 00:03:31.525 10:31:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:31.525 10:31:02 -- scripts/common.sh@393 -- # pt= 00:03:31.525 10:31:02 -- scripts/common.sh@394 -- # return 1 00:03:31.525 10:31:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:31.525 10:31:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:31.525 10:31:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:31.525 10:31:02 -- setup/common.sh@80 -- # echo 1073741824 00:03:31.525 10:31:02 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:03:31.525 10:31:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:31.525 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:31.525 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:31.525 10:31:02 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:31.525 10:31:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:31.525 10:31:02 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:31.525 10:31:02 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:31.525 10:31:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:31.786 No valid GPT data, bailing 00:03:31.786 10:31:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:31.786 10:31:02 -- scripts/common.sh@393 -- # pt= 00:03:31.786 10:31:02 -- scripts/common.sh@394 -- # return 1 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:31.786 10:31:02 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:31.786 10:31:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:31.786 10:31:02 -- setup/common.sh@80 -- # echo 4294967296 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:31.786 10:31:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:31.786 10:31:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:31.786 10:31:02 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:03:31.786 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:31.786 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:31.786 10:31:02 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:31.786 10:31:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:31.786 10:31:02 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:31.786 10:31:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:31.786 No valid GPT data, bailing 00:03:31.786 10:31:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:31.786 10:31:02 -- scripts/common.sh@393 -- # pt= 00:03:31.786 10:31:02 -- scripts/common.sh@394 -- # return 1 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:31.786 10:31:02 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:31.786 10:31:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:31.786 10:31:02 -- setup/common.sh@80 -- # echo 4294967296 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:31.786 10:31:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:31.786 10:31:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:31.786 10:31:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:31.786 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:31.786 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:31.786 10:31:02 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:31.786 10:31:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:31.786 10:31:02 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:31.786 10:31:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:31.786 No valid GPT data, bailing 00:03:31.786 10:31:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:31.786 10:31:02 -- scripts/common.sh@393 -- # pt= 00:03:31.786 10:31:02 -- scripts/common.sh@394 -- # return 1 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:31.786 10:31:02 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:31.786 10:31:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:31.786 10:31:02 -- setup/common.sh@80 -- # echo 4294967296 00:03:31.786 10:31:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:31.787 10:31:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:31.787 10:31:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:31.787 10:31:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:31.787 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:03:31.787 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme2 00:03:31.787 10:31:02 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:31.787 10:31:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:31.787 10:31:02 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:03:31.787 10:31:02 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:03:31.787 10:31:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:03:32.048 No valid GPT data, bailing 00:03:32.048 10:31:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:32.048 
10:31:02 -- scripts/common.sh@393 -- # pt= 00:03:32.048 10:31:02 -- scripts/common.sh@394 -- # return 1 00:03:32.048 10:31:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:03:32.048 10:31:02 -- setup/common.sh@76 -- # local dev=nvme2n1 00:03:32.048 10:31:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:03:32.048 10:31:02 -- setup/common.sh@80 -- # echo 6343335936 00:03:32.048 10:31:02 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:03:32.048 10:31:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.048 10:31:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:32.048 10:31:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.048 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:03:32.048 10:31:02 -- setup/devices.sh@201 -- # ctrl=nvme3 00:03:32.048 10:31:02 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:32.048 10:31:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:32.048 10:31:02 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:03:32.048 10:31:02 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:03:32.048 10:31:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:03:32.048 No valid GPT data, bailing 00:03:32.048 10:31:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:32.048 10:31:02 -- scripts/common.sh@393 -- # pt= 00:03:32.048 10:31:02 -- scripts/common.sh@394 -- # return 1 00:03:32.048 10:31:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:03:32.048 10:31:02 -- setup/common.sh@76 -- # local dev=nvme3n1 00:03:32.048 10:31:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:03:32.048 10:31:02 -- setup/common.sh@80 -- # echo 5368709120 00:03:32.048 10:31:02 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:32.048 10:31:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.048 10:31:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:32.048 10:31:02 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:03:32.048 10:31:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:03:32.048 10:31:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:32.048 10:31:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.048 10:31:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.048 10:31:02 -- common/autotest_common.sh@10 -- # set +x 00:03:32.048 ************************************ 00:03:32.048 START TEST nvme_mount 00:03:32.048 ************************************ 00:03:32.048 10:31:02 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:32.048 10:31:02 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:03:32.048 10:31:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:03:32.048 10:31:02 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.048 10:31:02 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.048 10:31:02 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:03:32.048 10:31:02 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:32.048 10:31:02 -- setup/common.sh@40 -- # local part_no=1 00:03:32.048 10:31:02 -- setup/common.sh@41 -- # local size=1073741824 00:03:32.048 10:31:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.048 10:31:02 -- setup/common.sh@44 -- # parts=() 00:03:32.048 10:31:02 -- 
setup/common.sh@44 -- # local parts 00:03:32.048 10:31:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.048 10:31:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.048 10:31:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.048 10:31:02 -- setup/common.sh@46 -- # (( part++ )) 00:03:32.048 10:31:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.048 10:31:02 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:32.048 10:31:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:32.048 10:31:02 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:03:33.002 Creating new GPT entries in memory. 00:03:33.002 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:33.002 other utilities. 00:03:33.002 10:31:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:33.002 10:31:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.002 10:31:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.002 10:31:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.002 10:31:03 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:03:34.405 Creating new GPT entries in memory. 00:03:34.405 The operation has completed successfully. 00:03:34.405 10:31:04 -- setup/common.sh@57 -- # (( part++ )) 00:03:34.406 10:31:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.406 10:31:04 -- setup/common.sh@62 -- # wait 53727 00:03:34.406 10:31:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.406 10:31:04 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:34.406 10:31:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.406 10:31:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:03:34.406 10:31:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:03:34.406 10:31:04 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.406 10:31:04 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:34.406 10:31:04 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:34.406 10:31:04 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:03:34.406 10:31:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.406 10:31:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:34.406 10:31:04 -- setup/devices.sh@53 -- # local found=0 00:03:34.406 10:31:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.406 10:31:04 -- setup/devices.sh@56 -- # : 00:03:34.406 10:31:04 -- setup/devices.sh@59 -- # local pci status 00:03:34.406 10:31:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.406 10:31:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:34.406 10:31:04 -- setup/devices.sh@47 -- # setup output config 00:03:34.406 10:31:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.406 10:31:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:34.406 10:31:04 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:34.406 10:31:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.671 10:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:34.671 10:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.671 10:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:34.671 10:31:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:03:34.671 10:31:05 -- setup/devices.sh@63 -- # found=1 00:03:34.671 10:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.671 10:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:34.671 10:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.931 10:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:34.931 10:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.931 10:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:34.931 10:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.931 10:31:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.931 10:31:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:34.931 10:31:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.193 10:31:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.193 10:31:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.193 10:31:05 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:35.193 10:31:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.193 10:31:05 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.193 10:31:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:35.193 10:31:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:03:35.193 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:35.193 10:31:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:35.193 10:31:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:35.455 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:35.455 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:35.455 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:35.455 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:03:35.455 10:31:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:35.455 10:31:05 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:35.455 10:31:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.455 10:31:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:03:35.455 10:31:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:03:35.455 10:31:05 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.455 10:31:05 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.455 10:31:05 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:35.455 10:31:05 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:03:35.455 10:31:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:35.455 10:31:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:35.455 10:31:05 -- setup/devices.sh@53 -- # local found=0 00:03:35.455 10:31:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.455 10:31:05 -- setup/devices.sh@56 -- # : 00:03:35.455 10:31:05 -- setup/devices.sh@59 -- # local pci status 00:03:35.455 10:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.455 10:31:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:35.455 10:31:05 -- setup/devices.sh@47 -- # setup output config 00:03:35.455 10:31:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.455 10:31:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:35.717 10:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:35.717 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.717 10:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:35.717 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.978 10:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:35.978 10:31:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:03:35.978 10:31:06 -- setup/devices.sh@63 -- # found=1 00:03:35.978 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.978 10:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:35.978 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.239 10:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:36.239 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.239 10:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:36.239 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.239 10:31:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.239 10:31:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:36.239 10:31:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:36.239 10:31:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.239 10:31:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:36.239 10:31:06 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:36.239 10:31:06 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:03:36.239 10:31:06 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:36.239 10:31:06 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:03:36.239 10:31:06 -- setup/devices.sh@50 -- # local mount_point= 00:03:36.239 10:31:06 -- setup/devices.sh@51 -- # local test_file= 00:03:36.239 10:31:06 -- setup/devices.sh@53 -- # local found=0 
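The mkfs/mount helper traced above (setup/common.sh@66 through @72) reduces to a few commands. A minimal sketch, reconstructed from those trace entries rather than taken verbatim from the script:

    mkfs() {
        local dev=$1 mount=$2 size=$3      # e.g. /dev/nvme1n1 .../test/setup/nvme_mount 1024M
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" $size         # quiet, force; $size may be empty for whole-device
        mount "$dev" "$mount"
    }

The verify step that follows re-runs setup.sh config with PCI_ALLOWED restricted to 0000:00:08.0 and confirms that the active mount keeps that controller from being rebound.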
00:03:36.239 10:31:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:36.239 10:31:06 -- setup/devices.sh@59 -- # local pci status 00:03:36.239 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.239 10:31:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:36.239 10:31:06 -- setup/devices.sh@47 -- # setup output config 00:03:36.239 10:31:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.239 10:31:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:36.500 10:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:36.500 10:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.500 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:36.500 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.762 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:36.762 10:31:07 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:03:36.762 10:31:07 -- setup/devices.sh@63 -- # found=1 00:03:36.762 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.762 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:36.762 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.023 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.023 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.023 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.023 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.023 10:31:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.023 10:31:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:37.023 10:31:07 -- setup/devices.sh@68 -- # return 0 00:03:37.023 10:31:07 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:37.023 10:31:07 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:37.023 10:31:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:37.023 10:31:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:37.023 10:31:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:37.284 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:37.284 ************************************ 00:03:37.284 END TEST nvme_mount 00:03:37.284 ************************************ 00:03:37.284 00:03:37.284 real 0m5.107s 00:03:37.284 user 0m0.959s 00:03:37.284 sys 0m1.370s 00:03:37.284 10:31:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:37.284 10:31:07 -- common/autotest_common.sh@10 -- # set +x 00:03:37.284 10:31:07 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:37.284 10:31:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.284 10:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.284 10:31:07 -- common/autotest_common.sh@10 -- # set +x 00:03:37.284 ************************************ 00:03:37.284 START TEST dm_mount 00:03:37.284 ************************************ 00:03:37.284 10:31:07 -- common/autotest_common.sh@1114 -- # dm_mount 00:03:37.284 10:31:07 -- setup/devices.sh@144 -- # pv=nvme1n1 00:03:37.284 10:31:07 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:03:37.284 10:31:07 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:03:37.284 10:31:07 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:03:37.284 10:31:07 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:37.284 10:31:07 -- setup/common.sh@40 -- # local part_no=2 00:03:37.284 10:31:07 -- setup/common.sh@41 -- # local size=1073741824 00:03:37.284 10:31:07 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:37.284 10:31:07 -- setup/common.sh@44 -- # parts=() 00:03:37.284 10:31:07 -- setup/common.sh@44 -- # local parts 00:03:37.284 10:31:07 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:37.284 10:31:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.284 10:31:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.284 10:31:07 -- setup/common.sh@46 -- # (( part++ )) 00:03:37.284 10:31:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.284 10:31:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.284 10:31:07 -- setup/common.sh@46 -- # (( part++ )) 00:03:37.284 10:31:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.284 10:31:07 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:37.284 10:31:07 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:37.284 10:31:07 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:03:38.225 Creating new GPT entries in memory. 00:03:38.225 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:38.225 other utilities. 00:03:38.225 10:31:08 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:38.225 10:31:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.225 10:31:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.225 10:31:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.225 10:31:08 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:03:39.610 Creating new GPT entries in memory. 00:03:39.610 The operation has completed successfully. 00:03:39.610 10:31:09 -- setup/common.sh@57 -- # (( part++ )) 00:03:39.610 10:31:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.610 10:31:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:39.610 10:31:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.610 10:31:09 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:03:40.549 The operation has completed successfully. 
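Condensed, the partition pass for dm_mount comes down to the sgdisk calls traced above (sector ranges verbatim; each partition spans 262144 512-byte sectors, i.e. 128 MiB). The partprobe line is an assumed stand-in for the script's sync_dev_uevents.sh udev wait, not what the script actually runs:

    sgdisk /dev/nvme1n1 --zap-all                                  # wipe old GPT/MBR structures
    flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191     # nvme1n1p1
    flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335   # nvme1n1p2
    partprobe /dev/nvme1n1                                         # assumption: the script waits on uevents instead

flock serializes each sgdisk invocation against anything else touching the disk, which is why every --new call in the trace is wrapped in it.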
00:03:40.549 10:31:10 -- setup/common.sh@57 -- # (( part++ )) 00:03:40.549 10:31:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.549 10:31:10 -- setup/common.sh@62 -- # wait 54356 00:03:40.549 10:31:10 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:40.549 10:31:10 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.549 10:31:10 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:40.549 10:31:10 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:40.549 10:31:10 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:40.549 10:31:10 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.549 10:31:10 -- setup/devices.sh@161 -- # break 00:03:40.549 10:31:10 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.549 10:31:10 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:40.549 10:31:10 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:40.549 10:31:10 -- setup/devices.sh@166 -- # dm=dm-0 00:03:40.549 10:31:10 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:03:40.549 10:31:10 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:03:40.549 10:31:10 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.549 10:31:10 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:40.549 10:31:10 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.549 10:31:10 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.549 10:31:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:40.549 10:31:10 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.549 10:31:11 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:40.549 10:31:11 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:40.549 10:31:11 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:03:40.549 10:31:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:40.549 10:31:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:40.549 10:31:11 -- setup/devices.sh@53 -- # local found=0 00:03:40.549 10:31:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:40.549 10:31:11 -- setup/devices.sh@56 -- # : 00:03:40.549 10:31:11 -- setup/devices.sh@59 -- # local pci status 00:03:40.549 10:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.549 10:31:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:40.549 10:31:11 -- setup/devices.sh@47 -- # setup output config 00:03:40.549 10:31:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.549 10:31:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.549 10:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:40.549 10:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.812 10:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:40.812 10:31:11 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.074 10:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:41.075 10:31:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:41.075 10:31:11 -- setup/devices.sh@63 -- # found=1 00:03:41.075 10:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.075 10:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:41.075 10:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.336 10:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:41.336 10:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.336 10:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:41.336 10:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.336 10:31:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.336 10:31:11 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:41.336 10:31:11 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:41.336 10:31:11 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:41.336 10:31:11 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:41.336 10:31:11 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:41.610 10:31:12 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:03:41.610 10:31:12 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:41.610 10:31:12 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:03:41.610 10:31:12 -- setup/devices.sh@50 -- # local mount_point= 00:03:41.610 10:31:12 -- setup/devices.sh@51 -- # local test_file= 00:03:41.610 10:31:12 -- setup/devices.sh@53 -- # local found=0 00:03:41.610 10:31:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:41.610 10:31:12 -- setup/devices.sh@59 -- # local pci status 00:03:41.610 10:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.610 10:31:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:41.610 10:31:12 -- setup/devices.sh@47 -- # setup output config 00:03:41.610 10:31:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.610 10:31:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.872 10:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:41.872 10:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.872 10:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:41.872 10:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.133 10:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:42.133 10:31:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:03:42.133 10:31:12 -- setup/devices.sh@63 -- # found=1 00:03:42.133 10:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.133 10:31:12 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:42.133 10:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.394 10:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:42.394 10:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.394 10:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:42.394 10:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.394 10:31:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.394 10:31:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:42.394 10:31:12 -- setup/devices.sh@68 -- # return 0 00:03:42.394 10:31:12 -- setup/devices.sh@187 -- # cleanup_dm 00:03:42.394 10:31:12 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:42.394 10:31:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:42.394 10:31:12 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:42.394 10:31:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:42.394 10:31:12 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:03:42.394 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.394 10:31:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:03:42.394 10:31:12 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:03:42.394 00:03:42.394 real 0m5.253s 00:03:42.394 user 0m0.673s 00:03:42.394 sys 0m0.923s 00:03:42.394 10:31:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:42.394 ************************************ 00:03:42.394 END TEST dm_mount 00:03:42.394 ************************************ 00:03:42.394 10:31:12 -- common/autotest_common.sh@10 -- # set +x 00:03:42.655 10:31:13 -- setup/devices.sh@1 -- # cleanup 00:03:42.655 10:31:13 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:42.655 10:31:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.655 10:31:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:42.655 10:31:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:03:42.655 10:31:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:42.655 10:31:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:42.916 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:42.916 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:42.916 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:42.916 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:03:42.916 10:31:13 -- setup/devices.sh@12 -- # cleanup_dm 00:03:42.916 10:31:13 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:42.916 10:31:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:42.916 10:31:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:42.916 10:31:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:03:42.916 10:31:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:03:42.916 10:31:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:03:42.916 00:03:42.916 real 0m12.538s 00:03:42.916 user 0m2.431s 00:03:42.916 sys 0m3.035s 00:03:42.916 10:31:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:42.916 10:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:42.916 ************************************ 00:03:42.916 END TEST devices 00:03:42.916 
************************************ 00:03:42.916 ************************************ 00:03:42.916 END TEST setup.sh 00:03:42.916 ************************************ 00:03:42.916 00:03:42.916 real 0m42.640s 00:03:42.916 user 0m8.185s 00:03:42.916 sys 0m11.436s 00:03:42.916 10:31:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:42.916 10:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:42.916 10:31:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:43.178 Hugepages
00:03:43.178 node     hugesize     free /  total
00:03:43.178 node0   1048576kB        0 /      0
00:03:43.178 node0      2048kB     2048 /   2048
00:03:43.178
00:03:43.178 Type     BDF             Vendor Device NUMA    Driver       Device    Block devices
00:03:43.178 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci   -         vda
00:03:43.178 NVMe     0000:00:06.0    1b36   0010   unknown nvme         nvme2     nvme2n1
00:03:43.178 NVMe     0000:00:07.0    1b36   0010   unknown nvme         nvme3     nvme3n1
00:03:43.178 NVMe     0000:00:08.0    1b36   0010   unknown nvme         nvme1     nvme1n1 nvme1n2 nvme1n3
00:03:43.440 NVMe     0000:00:09.0    1b36   0010   unknown nvme         nvme0     nvme0n1
00:03:43.440 10:31:13 -- spdk/autotest.sh@128 -- # uname -s 00:03:43.440 10:31:13 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:03:43.440 10:31:13 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:03:43.440 10:31:13 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.273 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.273 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.273 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.273 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.273 10:31:14 -- common/autotest_common.sh@1527 -- # sleep 1 00:03:45.658 10:31:15 -- common/autotest_common.sh@1528 -- # bdfs=() 00:03:45.658 10:31:15 -- common/autotest_common.sh@1528 -- # local bdfs 00:03:45.658 10:31:15 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:03:45.658 10:31:15 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:03:45.658 10:31:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:45.658 10:31:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:45.658 10:31:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:45.658 10:31:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:45.658 10:31:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:45.658 10:31:15 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:03:45.658 10:31:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:45.658 10:31:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.919 Waiting for block devices as requested 00:03:45.919 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:03:45.919 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:03:45.919 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.180 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:03:51.548 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:03:51.548 10:31:21 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
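The loop entered here repeats one probe per controller; a sketch of a single iteration, reconstructed from the @1497 through @1552 entries below (the oacs_ns_manage derivation is an assumption; bit 3 of OACS is Namespace Management, and 0x12a & 0x8 = 8):

    for bdf in "${bdfs[@]}"; do
        # map the PCI address to its controller node, e.g. 0000:00:06.0 -> nvme2
        ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        nvme_ctrlr=/dev/$(basename "$ctrlr_path")
        oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)          # ' 0x12a' in this run
        oacs_ns_manage=$(( oacs & 0x8 ))                                      # 8 when NS management is supported
        [[ $oacs_ns_manage -ne 0 ]] || continue
        unvmcap=$(nvme id-ctrl "$nvme_ctrlr" | grep unvmcap | cut -d: -f2)    # ' 0' in this run
        (( unvmcap == 0 )) && continue    # no unallocated capacity, nothing to revert
    done

All four controllers report unvmcap 0, so every iteration hits the final continue and no namespaces are actually reverted in this run.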
00:03:51.548 10:31:21 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:51.548 10:31:21 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:51.548 10:31:21 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:51.548 10:31:21 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:51.548 10:31:21 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1552 -- # continue 00:03:51.548 10:31:21 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:51.548 10:31:21 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:51.548 10:31:21 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:51.548 10:31:21 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:51.548 10:31:21 -- 
common/autotest_common.sh@1549 -- # grep unvmcap 00:03:51.548 10:31:21 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:51.548 10:31:21 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1552 -- # continue 00:03:51.548 10:31:21 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:51.548 10:31:21 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:03:51.548 10:31:21 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:03:51.548 10:31:21 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:03:51.548 10:31:21 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:03:51.548 10:31:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:03:51.549 10:31:21 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:03:51.549 10:31:21 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:51.549 10:31:21 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:51.549 10:31:21 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:51.549 10:31:21 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:51.549 10:31:21 -- common/autotest_common.sh@1552 -- # continue 00:03:51.549 10:31:21 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:51.549 10:31:21 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:03:51.549 10:31:21 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.549 10:31:21 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:03:51.549 10:31:21 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:03:51.549 10:31:21 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:03:51.549 10:31:21 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:03:51.549 10:31:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:03:51.549 10:31:21 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:03:51.549 10:31:21 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.549 10:31:21 -- common/autotest_common.sh@1540 -- # oacs=' 
0x12a' 00:03:51.549 10:31:21 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:51.549 10:31:21 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:51.549 10:31:21 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:51.549 10:31:21 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:51.549 10:31:21 -- common/autotest_common.sh@1552 -- # continue 00:03:51.549 10:31:21 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:03:51.549 10:31:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:51.549 10:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:51.549 10:31:21 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:03:51.549 10:31:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.549 10:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:51.549 10:31:21 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.121 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.121 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.121 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.121 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.121 10:31:22 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:03:52.121 10:31:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:52.121 10:31:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.382 10:31:22 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:03:52.382 10:31:22 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:03:52.382 10:31:22 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:03:52.382 10:31:22 -- common/autotest_common.sh@1572 -- # bdfs=() 00:03:52.382 10:31:22 -- common/autotest_common.sh@1572 -- # local bdfs 00:03:52.382 10:31:22 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:03:52.382 10:31:22 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:52.382 10:31:22 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:52.382 10:31:22 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:52.382 10:31:22 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:52.382 10:31:22 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:52.382 10:31:22 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:03:52.382 10:31:22 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:52.382 10:31:22 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:52.382 10:31:22 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.382 10:31:22 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:52.382 10:31:22 -- common/autotest_common.sh@1576 -- # 
[[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.382 10:31:22 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:52.382 10:31:22 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.382 10:31:22 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:03:52.382 10:31:22 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:52.382 10:31:22 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.382 10:31:22 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:03:52.382 10:31:22 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:03:52.382 10:31:22 -- common/autotest_common.sh@1588 -- # return 0 00:03:52.382 10:31:22 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:03:52.382 10:31:22 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:03:52.382 10:31:22 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:52.382 10:31:22 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:52.382 10:31:22 -- spdk/autotest.sh@160 -- # timing_enter lib 00:03:52.382 10:31:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.382 10:31:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.382 10:31:22 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:52.382 10:31:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.382 10:31:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.382 10:31:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.382 ************************************ 00:03:52.382 START TEST env 00:03:52.382 ************************************ 00:03:52.382 10:31:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:52.382 * Looking for test storage... 00:03:52.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:52.382 10:31:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:52.382 10:31:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:52.382 10:31:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:52.382 10:31:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:52.382 10:31:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:52.382 10:31:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:52.382 10:31:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:52.382 10:31:22 -- scripts/common.sh@335 -- # IFS=.-: 00:03:52.382 10:31:22 -- scripts/common.sh@335 -- # read -ra ver1 00:03:52.382 10:31:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.382 10:31:22 -- scripts/common.sh@336 -- # read -ra ver2 00:03:52.382 10:31:22 -- scripts/common.sh@337 -- # local 'op=<' 00:03:52.382 10:31:22 -- scripts/common.sh@339 -- # ver1_l=2 00:03:52.382 10:31:22 -- scripts/common.sh@340 -- # ver2_l=1 00:03:52.382 10:31:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:52.382 10:31:22 -- scripts/common.sh@343 -- # case "$op" in 00:03:52.382 10:31:22 -- scripts/common.sh@344 -- # : 1 00:03:52.382 10:31:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:52.382 10:31:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.382 10:31:22 -- scripts/common.sh@364 -- # decimal 1 00:03:52.382 10:31:22 -- scripts/common.sh@352 -- # local d=1 00:03:52.382 10:31:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.382 10:31:22 -- scripts/common.sh@354 -- # echo 1 00:03:52.382 10:31:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:52.382 10:31:22 -- scripts/common.sh@365 -- # decimal 2 00:03:52.382 10:31:22 -- scripts/common.sh@352 -- # local d=2 00:03:52.382 10:31:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.382 10:31:22 -- scripts/common.sh@354 -- # echo 2 00:03:52.382 10:31:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:52.382 10:31:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:52.382 10:31:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:52.382 10:31:22 -- scripts/common.sh@367 -- # return 0 00:03:52.382 10:31:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.382 10:31:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:52.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.383 --rc genhtml_branch_coverage=1 00:03:52.383 --rc genhtml_function_coverage=1 00:03:52.383 --rc genhtml_legend=1 00:03:52.383 --rc geninfo_all_blocks=1 00:03:52.383 --rc geninfo_unexecuted_blocks=1 00:03:52.383 00:03:52.383 ' 00:03:52.383 10:31:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:52.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.383 --rc genhtml_branch_coverage=1 00:03:52.383 --rc genhtml_function_coverage=1 00:03:52.383 --rc genhtml_legend=1 00:03:52.383 --rc geninfo_all_blocks=1 00:03:52.383 --rc geninfo_unexecuted_blocks=1 00:03:52.383 00:03:52.383 ' 00:03:52.383 10:31:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:52.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.383 --rc genhtml_branch_coverage=1 00:03:52.383 --rc genhtml_function_coverage=1 00:03:52.383 --rc genhtml_legend=1 00:03:52.383 --rc geninfo_all_blocks=1 00:03:52.383 --rc geninfo_unexecuted_blocks=1 00:03:52.383 00:03:52.383 ' 00:03:52.383 10:31:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:52.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.383 --rc genhtml_branch_coverage=1 00:03:52.383 --rc genhtml_function_coverage=1 00:03:52.383 --rc genhtml_legend=1 00:03:52.383 --rc geninfo_all_blocks=1 00:03:52.383 --rc geninfo_unexecuted_blocks=1 00:03:52.383 00:03:52.383 ' 00:03:52.383 10:31:22 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:52.383 10:31:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.383 10:31:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.383 10:31:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.383 ************************************ 00:03:52.383 START TEST env_memory 00:03:52.383 ************************************ 00:03:52.383 10:31:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:52.642 00:03:52.642 00:03:52.642 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.642 http://cunit.sourceforge.net/ 00:03:52.642 00:03:52.642 00:03:52.642 Suite: memory 00:03:52.642 Test: alloc and free memory map ...[2024-12-03 10:31:23.033457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:52.642 passed 00:03:52.642 Test: mem 
map translation ...[2024-12-03 10:31:23.073283] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:52.642 [2024-12-03 10:31:23.073331] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:52.642 [2024-12-03 10:31:23.073390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:52.643 [2024-12-03 10:31:23.073405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:52.643 passed 00:03:52.643 Test: mem map registration ...[2024-12-03 10:31:23.141986] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:52.643 [2024-12-03 10:31:23.142028] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:52.643 passed 00:03:52.643 Test: mem map adjacent registrations ...passed 00:03:52.643 00:03:52.643 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.643 suites 1 1 n/a 0 0 00:03:52.643 tests 4 4 4 0 0 00:03:52.643 asserts 152 152 152 0 n/a 00:03:52.643 00:03:52.643 Elapsed time = 0.235 seconds 00:03:52.643 00:03:52.643 real 0m0.268s 00:03:52.643 user 0m0.236s 00:03:52.643 sys 0m0.024s 00:03:52.643 10:31:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:52.643 10:31:23 -- common/autotest_common.sh@10 -- # set +x 00:03:52.643 ************************************ 00:03:52.643 END TEST env_memory 00:03:52.643 ************************************ 00:03:52.903 10:31:23 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:52.903 10:31:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.903 10:31:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.903 10:31:23 -- common/autotest_common.sh@10 -- # set +x 00:03:52.903 ************************************ 00:03:52.903 START TEST env_vtophys 00:03:52.903 ************************************ 00:03:52.903 10:31:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:52.903 EAL: lib.eal log level changed from notice to debug 00:03:52.903 EAL: Detected lcore 0 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 1 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 2 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 3 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 4 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 5 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 6 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 7 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 8 as core 0 on socket 0 00:03:52.903 EAL: Detected lcore 9 as core 0 on socket 0 00:03:52.903 EAL: Maximum logical cores by configuration: 128 00:03:52.903 EAL: Detected CPU lcores: 10 00:03:52.903 EAL: Detected NUMA nodes: 1 00:03:52.903 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:52.903 EAL: Detected shared linkage of DPDK 00:03:52.903 EAL: No shared files mode enabled, IPC will be disabled 00:03:52.903 EAL: Selected IOVA mode 'PA' 00:03:52.903 EAL: Probing VFIO support... 
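Note on the env_memory output above: the *ERROR* lines are expected negative-path output, not failures — memory_ut deliberately passes invalid vaddr/len pairs to spdk_mem_map_set_translation() and spdk_mem_register() and asserts they are rejected, which is why the suite still reports 4/4 tests passed. A minimal sketch for rerunning just this test outside autotest, using the binary path shown in the run_test invocation above:

    # Rerun only the env_memory unit test; on success CUnit prints the same
    # 4-test / 152-assert summary seen in the log.
    cd /home/vagrant/spdk_repo/spdk
    ./test/env/memory/memory_ut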
00:03:52.903 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:52.903 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:52.903 EAL: Ask a virtual area of 0x2e000 bytes 00:03:52.903 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:52.903 EAL: Setting up physically contiguous memory... 00:03:52.903 EAL: Setting maximum number of open files to 524288 00:03:52.903 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:52.903 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:52.903 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.903 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:52.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.903 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.903 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:52.903 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:52.903 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.903 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:52.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.903 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.903 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:52.903 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:52.903 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.903 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:52.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.903 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.903 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:52.903 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:52.903 EAL: Ask a virtual area of 0x61000 bytes 00:03:52.903 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:52.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:52.903 EAL: Ask a virtual area of 0x400000000 bytes 00:03:52.903 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:52.903 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:52.903 EAL: Hugepages will be freed exactly as allocated. 00:03:52.903 EAL: No shared files mode enabled, IPC is disabled 00:03:52.903 EAL: No shared files mode enabled, IPC is disabled 00:03:52.903 EAL: TSC frequency is ~2600000 KHz 00:03:52.903 EAL: Main lcore 0 is ready (tid=7fac3f700a40;cpuset=[0]) 00:03:52.903 EAL: Trying to obtain current memory policy. 00:03:52.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.903 EAL: Restoring previous memory policy: 0 00:03:52.903 EAL: request: mp_malloc_sync 00:03:52.903 EAL: No shared files mode enabled, IPC is disabled 00:03:52.903 EAL: Heap on socket 0 was expanded by 2MB 00:03:52.903 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:52.903 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:52.903 EAL: Mem event callback 'spdk:(nil)' registered 00:03:52.903 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:52.903 00:03:52.903 00:03:52.903 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.903 http://cunit.sourceforge.net/ 00:03:52.903 00:03:52.903 00:03:52.903 Suite: components_suite 00:03:53.164 Test: vtophys_malloc_test ...passed 00:03:53.164 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
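The four memseg lists reserved above each cover 0x400000000 bytes (16 GiB) of virtual address space: n_segs 8192 segments at the 2 MiB hugepage size, so 64 GiB of VA is reserved in total even though no hugepages are backed yet. The arithmetic checks out directly:

    # 8192 segments x 2 MiB pages = 16 GiB per memseg list; 4 lists = 64 GiB VA
    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000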
00:03:53.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.164 EAL: Restoring previous memory policy: 4 00:03:53.164 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.164 EAL: request: mp_malloc_sync 00:03:53.164 EAL: No shared files mode enabled, IPC is disabled 00:03:53.164 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.164 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.164 EAL: request: mp_malloc_sync 00:03:53.164 EAL: No shared files mode enabled, IPC is disabled 00:03:53.164 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.164 EAL: Trying to obtain current memory policy. 00:03:53.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.164 EAL: Restoring previous memory policy: 4 00:03:53.164 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.164 EAL: request: mp_malloc_sync 00:03:53.164 EAL: No shared files mode enabled, IPC is disabled 00:03:53.164 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.164 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.164 EAL: request: mp_malloc_sync 00:03:53.164 EAL: No shared files mode enabled, IPC is disabled 00:03:53.164 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.164 EAL: Trying to obtain current memory policy. 00:03:53.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.164 EAL: Restoring previous memory policy: 4 00:03:53.164 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.164 EAL: request: mp_malloc_sync 00:03:53.164 EAL: No shared files mode enabled, IPC is disabled 00:03:53.164 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.425 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.425 EAL: request: mp_malloc_sync 00:03:53.425 EAL: No shared files mode enabled, IPC is disabled 00:03:53.425 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.425 EAL: Trying to obtain current memory policy. 00:03:53.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.425 EAL: Restoring previous memory policy: 4 00:03:53.425 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.425 EAL: request: mp_malloc_sync 00:03:53.425 EAL: No shared files mode enabled, IPC is disabled 00:03:53.425 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.425 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.425 EAL: request: mp_malloc_sync 00:03:53.425 EAL: No shared files mode enabled, IPC is disabled 00:03:53.425 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.425 EAL: Trying to obtain current memory policy. 00:03:53.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.425 EAL: Restoring previous memory policy: 4 00:03:53.425 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.425 EAL: request: mp_malloc_sync 00:03:53.425 EAL: No shared files mode enabled, IPC is disabled 00:03:53.425 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.425 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.425 EAL: request: mp_malloc_sync 00:03:53.425 EAL: No shared files mode enabled, IPC is disabled 00:03:53.425 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.425 EAL: Trying to obtain current memory policy. 
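The heap deltas reported by vtophys_spdk_malloc_test fit the pattern 2^k + 2 MB — 4, 6, 10, 18, 34 above, continuing with 66 through 1026 below — so each round roughly doubles the allocation size. A quick sketch that reproduces the observed sequence (an inference from the deltas in this log, not a statement about the test's internal formula):

    # Heap expand/shrink sizes observed in the log: 2^k + 2 MB for k = 1..10
    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB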
00:03:53.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.425 EAL: Restoring previous memory policy: 4 00:03:53.425 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.425 EAL: request: mp_malloc_sync 00:03:53.425 EAL: No shared files mode enabled, IPC is disabled 00:03:53.425 EAL: Heap on socket 0 was expanded by 66MB 00:03:53.425 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.425 EAL: request: mp_malloc_sync 00:03:53.425 EAL: No shared files mode enabled, IPC is disabled 00:03:53.425 EAL: Heap on socket 0 was shrunk by 66MB 00:03:53.686 EAL: Trying to obtain current memory policy. 00:03:53.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.686 EAL: Restoring previous memory policy: 4 00:03:53.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.686 EAL: request: mp_malloc_sync 00:03:53.686 EAL: No shared files mode enabled, IPC is disabled 00:03:53.686 EAL: Heap on socket 0 was expanded by 130MB 00:03:53.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.686 EAL: request: mp_malloc_sync 00:03:53.686 EAL: No shared files mode enabled, IPC is disabled 00:03:53.686 EAL: Heap on socket 0 was shrunk by 130MB 00:03:53.947 EAL: Trying to obtain current memory policy. 00:03:53.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.947 EAL: Restoring previous memory policy: 4 00:03:53.947 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.947 EAL: request: mp_malloc_sync 00:03:53.947 EAL: No shared files mode enabled, IPC is disabled 00:03:53.947 EAL: Heap on socket 0 was expanded by 258MB 00:03:54.210 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.210 EAL: request: mp_malloc_sync 00:03:54.210 EAL: No shared files mode enabled, IPC is disabled 00:03:54.210 EAL: Heap on socket 0 was shrunk by 258MB 00:03:54.471 EAL: Trying to obtain current memory policy. 00:03:54.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.471 EAL: Restoring previous memory policy: 4 00:03:54.471 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.471 EAL: request: mp_malloc_sync 00:03:54.471 EAL: No shared files mode enabled, IPC is disabled 00:03:54.471 EAL: Heap on socket 0 was expanded by 514MB 00:03:55.042 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.042 EAL: request: mp_malloc_sync 00:03:55.042 EAL: No shared files mode enabled, IPC is disabled 00:03:55.042 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.615 EAL: Trying to obtain current memory policy. 
00:03:55.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.875 EAL: Restoring previous memory policy: 4 00:03:55.875 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.875 EAL: request: mp_malloc_sync 00:03:55.875 EAL: No shared files mode enabled, IPC is disabled 00:03:55.875 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.268 EAL: request: mp_malloc_sync 00:03:57.268 EAL: No shared files mode enabled, IPC is disabled 00:03:57.268 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:57.840 passed 00:03:57.840 00:03:57.840 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.840 suites 1 1 n/a 0 0 00:03:57.840 tests 2 2 2 0 0 00:03:57.840 asserts 5558 5558 5558 0 n/a 00:03:57.840 00:03:57.840 Elapsed time = 4.796 seconds 00:03:57.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.840 EAL: request: mp_malloc_sync 00:03:57.840 EAL: No shared files mode enabled, IPC is disabled 00:03:57.840 EAL: Heap on socket 0 was shrunk by 2MB 00:03:57.840 EAL: No shared files mode enabled, IPC is disabled 00:03:57.840 EAL: No shared files mode enabled, IPC is disabled 00:03:57.840 EAL: No shared files mode enabled, IPC is disabled 00:03:57.840 00:03:57.840 real 0m5.047s 00:03:57.840 user 0m4.297s 00:03:57.840 sys 0m0.606s 00:03:57.840 10:31:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:57.840 10:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:57.840 ************************************ 00:03:57.840 END TEST env_vtophys 00:03:57.840 ************************************ 00:03:57.840 10:31:28 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:57.840 10:31:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.840 10:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.840 10:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:57.840 ************************************ 00:03:57.840 START TEST env_pci 00:03:57.840 ************************************ 00:03:57.840 10:31:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:57.840 00:03:57.840 00:03:57.840 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.840 http://cunit.sourceforge.net/ 00:03:57.840 00:03:57.840 00:03:57.840 Suite: pci 00:03:57.840 Test: pci_hook ...[2024-12-03 10:31:28.397777] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56063 has claimed it 00:03:57.840 passed 00:03:57.840 00:03:57.840 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.840 suites 1 1 n/a 0 0 00:03:57.840 tests 1 1 1 0 0 00:03:57.840 asserts 25 25 25 0 n/a 00:03:57.840 00:03:57.840 Elapsed time = 0.005 seconds 00:03:57.840 EAL: Cannot find device (10000:00:01.0) 00:03:57.840 EAL: Failed to attach device on primary process 00:03:57.840 00:03:57.840 real 0m0.057s 00:03:57.840 user 0m0.029s 00:03:57.840 sys 0m0.028s 00:03:57.840 10:31:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:57.840 10:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:57.840 ************************************ 00:03:57.840 END TEST env_pci 00:03:57.840 ************************************ 00:03:58.101 10:31:28 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:58.101 10:31:28 -- env/env.sh@15 -- # uname 00:03:58.101 10:31:28 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:58.101 10:31:28 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:03:58.101 10:31:28 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.101 10:31:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:58.101 10:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.101 10:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:58.101 ************************************ 00:03:58.101 START TEST env_dpdk_post_init 00:03:58.101 ************************************ 00:03:58.101 10:31:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.101 EAL: Detected CPU lcores: 10 00:03:58.101 EAL: Detected NUMA nodes: 1 00:03:58.101 EAL: Detected shared linkage of DPDK 00:03:58.101 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.101 EAL: Selected IOVA mode 'PA' 00:03:58.101 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:58.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:03:58.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:03:58.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:03:58.101 Starting DPDK initialization... 00:03:58.101 Starting SPDK post initialization... 00:03:58.101 SPDK NVMe probe 00:03:58.101 Attaching to 0000:00:06.0 00:03:58.101 Attaching to 0000:00:07.0 00:03:58.101 Attaching to 0000:00:08.0 00:03:58.101 Attaching to 0000:00:09.0 00:03:58.101 Attached to 0000:00:06.0 00:03:58.101 Attached to 0000:00:07.0 00:03:58.101 Attached to 0000:00:09.0 00:03:58.101 Attached to 0000:00:08.0 00:03:58.101 Cleaning up... 00:03:58.101 00:03:58.101 real 0m0.230s 00:03:58.101 user 0m0.067s 00:03:58.101 sys 0m0.065s 00:03:58.101 10:31:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.101 ************************************ 00:03:58.101 END TEST env_dpdk_post_init 00:03:58.101 10:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:58.101 ************************************ 00:03:58.363 10:31:28 -- env/env.sh@26 -- # uname 00:03:58.363 10:31:28 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:58.363 10:31:28 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.363 10:31:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.363 10:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.363 10:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:58.363 ************************************ 00:03:58.363 START TEST env_mem_callbacks 00:03:58.363 ************************************ 00:03:58.363 10:31:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.363 EAL: Detected CPU lcores: 10 00:03:58.363 EAL: Detected NUMA nodes: 1 00:03:58.363 EAL: Detected shared linkage of DPDK 00:03:58.363 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.363 EAL: Selected IOVA mode 'PA' 00:03:58.363 00:03:58.363 00:03:58.363 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.363 http://cunit.sourceforge.net/ 00:03:58.363 00:03:58.363 00:03:58.363 Suite: memory 00:03:58.363 Test: test ... 
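In the register/unregister trace that follows, registrations appear rounded up to the 2 MiB hugepage size: the 3 MiB (3145728 B) malloc triggers a 4 MiB (4194304 B) registration, the 64 B malloc triggers none because it fits an already-registered region, and the 4 MiB malloc registers 6 MiB (likely pushed over a boundary by allocator metadata — an inference from the sizes, not confirmed here). The rounding, as a sketch:

    # Round an allocation up to the next 2 MiB hugepage boundary, matching the
    # registration sizes reported by the mem-event callback trace below:
    sz=3145728
    echo $(( (sz + 2097151) / 2097152 * 2097152 ))   # -> 4194304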
00:03:58.363 register 0x200000200000 2097152 00:03:58.363 malloc 3145728 00:03:58.363 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.363 register 0x200000400000 4194304 00:03:58.363 buf 0x2000004fffc0 len 3145728 PASSED 00:03:58.363 malloc 64 00:03:58.363 buf 0x2000004ffec0 len 64 PASSED 00:03:58.363 malloc 4194304 00:03:58.363 register 0x200000800000 6291456 00:03:58.363 buf 0x2000009fffc0 len 4194304 PASSED 00:03:58.363 free 0x2000004fffc0 3145728 00:03:58.363 free 0x2000004ffec0 64 00:03:58.363 unregister 0x200000400000 4194304 PASSED 00:03:58.363 free 0x2000009fffc0 4194304 00:03:58.363 unregister 0x200000800000 6291456 PASSED 00:03:58.363 malloc 8388608 00:03:58.363 register 0x200000400000 10485760 00:03:58.363 buf 0x2000005fffc0 len 8388608 PASSED 00:03:58.363 free 0x2000005fffc0 8388608 00:03:58.363 unregister 0x200000400000 10485760 PASSED 00:03:58.363 passed 00:03:58.363 00:03:58.363 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.363 suites 1 1 n/a 0 0 00:03:58.363 tests 1 1 1 0 0 00:03:58.363 asserts 15 15 15 0 n/a 00:03:58.363 00:03:58.363 Elapsed time = 0.047 seconds 00:03:58.630 00:03:58.630 real 0m0.238s 00:03:58.630 user 0m0.078s 00:03:58.630 sys 0m0.058s 00:03:58.630 10:31:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.630 10:31:28 -- common/autotest_common.sh@10 -- # set +x 00:03:58.630 ************************************ 00:03:58.630 END TEST env_mem_callbacks 00:03:58.630 ************************************ 00:03:58.630 00:03:58.630 real 0m6.177s 00:03:58.630 user 0m4.861s 00:03:58.630 sys 0m0.968s 00:03:58.630 10:31:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.630 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.630 ************************************ 00:03:58.630 END TEST env 00:03:58.630 ************************************ 00:03:58.630 10:31:29 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:58.630 10:31:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.630 10:31:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.630 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.630 ************************************ 00:03:58.630 START TEST rpc 00:03:58.630 ************************************ 00:03:58.630 10:31:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:58.630 * Looking for test storage... 
00:03:58.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.630 10:31:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:58.630 10:31:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:58.630 10:31:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:58.630 10:31:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:58.630 10:31:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:58.630 10:31:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:58.630 10:31:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:58.630 10:31:29 -- scripts/common.sh@335 -- # IFS=.-: 00:03:58.630 10:31:29 -- scripts/common.sh@335 -- # read -ra ver1 00:03:58.630 10:31:29 -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.630 10:31:29 -- scripts/common.sh@336 -- # read -ra ver2 00:03:58.630 10:31:29 -- scripts/common.sh@337 -- # local 'op=<' 00:03:58.630 10:31:29 -- scripts/common.sh@339 -- # ver1_l=2 00:03:58.630 10:31:29 -- scripts/common.sh@340 -- # ver2_l=1 00:03:58.630 10:31:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:58.630 10:31:29 -- scripts/common.sh@343 -- # case "$op" in 00:03:58.630 10:31:29 -- scripts/common.sh@344 -- # : 1 00:03:58.630 10:31:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:58.630 10:31:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:58.630 10:31:29 -- scripts/common.sh@364 -- # decimal 1 00:03:58.630 10:31:29 -- scripts/common.sh@352 -- # local d=1 00:03:58.630 10:31:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.630 10:31:29 -- scripts/common.sh@354 -- # echo 1 00:03:58.630 10:31:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:58.630 10:31:29 -- scripts/common.sh@365 -- # decimal 2 00:03:58.630 10:31:29 -- scripts/common.sh@352 -- # local d=2 00:03:58.630 10:31:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.630 10:31:29 -- scripts/common.sh@354 -- # echo 2 00:03:58.630 10:31:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:58.630 10:31:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:58.630 10:31:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:58.630 10:31:29 -- scripts/common.sh@367 -- # return 0 00:03:58.630 10:31:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.630 10:31:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.630 --rc genhtml_branch_coverage=1 00:03:58.630 --rc genhtml_function_coverage=1 00:03:58.630 --rc genhtml_legend=1 00:03:58.630 --rc geninfo_all_blocks=1 00:03:58.630 --rc geninfo_unexecuted_blocks=1 00:03:58.630 00:03:58.630 ' 00:03:58.630 10:31:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.630 --rc genhtml_branch_coverage=1 00:03:58.630 --rc genhtml_function_coverage=1 00:03:58.630 --rc genhtml_legend=1 00:03:58.630 --rc geninfo_all_blocks=1 00:03:58.630 --rc geninfo_unexecuted_blocks=1 00:03:58.630 00:03:58.630 ' 00:03:58.630 10:31:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.630 --rc genhtml_branch_coverage=1 00:03:58.630 --rc genhtml_function_coverage=1 00:03:58.630 --rc genhtml_legend=1 00:03:58.630 --rc geninfo_all_blocks=1 00:03:58.630 --rc geninfo_unexecuted_blocks=1 00:03:58.630 00:03:58.630 ' 00:03:58.630 10:31:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.630 --rc genhtml_branch_coverage=1 00:03:58.630 --rc genhtml_function_coverage=1 00:03:58.630 --rc genhtml_legend=1 00:03:58.630 --rc geninfo_all_blocks=1 00:03:58.630 --rc geninfo_unexecuted_blocks=1 00:03:58.630 00:03:58.630 ' 00:03:58.630 10:31:29 -- rpc/rpc.sh@65 -- # spdk_pid=56184 00:03:58.630 10:31:29 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.630 10:31:29 -- rpc/rpc.sh@67 -- # waitforlisten 56184 00:03:58.630 10:31:29 -- common/autotest_common.sh@829 -- # '[' -z 56184 ']' 00:03:58.630 10:31:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.630 10:31:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:58.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.630 10:31:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.630 10:31:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:58.630 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.630 10:31:29 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:58.890 [2024-12-03 10:31:29.260633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:58.890 [2024-12-03 10:31:29.260749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56184 ] 00:03:58.890 [2024-12-03 10:31:29.409307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.151 [2024-12-03 10:31:29.583433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:59.151 [2024-12-03 10:31:29.583627] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:59.151 [2024-12-03 10:31:29.583643] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56184' to capture a snapshot of events at runtime. 00:03:59.151 [2024-12-03 10:31:29.583652] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56184 for offline analysis/debug. 
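The app_setup_trace notices above print the exact commands for inspecting the bdev tracepoint group enabled via '-e bdev'; either capture a live snapshot while the target runs or keep the shm file for later:

    # Both commands are taken verbatim from the target's startup notices above:
    spdk_trace -s spdk_tgt -p 56184
    # or preserve the trace file for offline analysis once the target exits:
    cp /dev/shm/spdk_tgt_trace.pid56184 /tmp/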
00:03:59.151 [2024-12-03 10:31:29.583683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.537 10:31:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.537 10:31:30 -- common/autotest_common.sh@862 -- # return 0 00:04:00.537 10:31:30 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.537 10:31:30 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.537 10:31:30 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:00.537 10:31:30 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:00.537 10:31:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.537 10:31:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 ************************************ 00:04:00.537 START TEST rpc_integrity 00:04:00.537 ************************************ 00:04:00.537 10:31:30 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:00.537 10:31:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.537 10:31:30 -- rpc/rpc.sh@13 -- # jq length 00:04:00.537 10:31:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.537 10:31:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:00.537 10:31:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.537 { 00:04:00.537 "name": "Malloc0", 00:04:00.537 "aliases": [ 00:04:00.537 "fd304d44-bdd1-4b33-81b5-3e8d3f1803ac" 00:04:00.537 ], 00:04:00.537 "product_name": "Malloc disk", 00:04:00.537 "block_size": 512, 00:04:00.537 "num_blocks": 16384, 00:04:00.537 "uuid": "fd304d44-bdd1-4b33-81b5-3e8d3f1803ac", 00:04:00.537 "assigned_rate_limits": { 00:04:00.537 "rw_ios_per_sec": 0, 00:04:00.537 "rw_mbytes_per_sec": 0, 00:04:00.537 "r_mbytes_per_sec": 0, 00:04:00.537 "w_mbytes_per_sec": 0 00:04:00.537 }, 00:04:00.537 "claimed": false, 00:04:00.537 "zoned": false, 00:04:00.537 "supported_io_types": { 00:04:00.537 "read": true, 00:04:00.537 "write": true, 00:04:00.537 "unmap": true, 00:04:00.537 "write_zeroes": true, 00:04:00.537 "flush": true, 00:04:00.537 "reset": true, 00:04:00.537 "compare": false, 00:04:00.537 "compare_and_write": false, 00:04:00.537 "abort": true, 00:04:00.537 "nvme_admin": false, 00:04:00.537 "nvme_io": false 00:04:00.537 }, 00:04:00.537 "memory_domains": [ 00:04:00.537 { 00:04:00.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.537 
"dma_device_type": 2 00:04:00.537 } 00:04:00.537 ], 00:04:00.537 "driver_specific": {} 00:04:00.537 } 00:04:00.537 ]' 00:04:00.537 10:31:30 -- rpc/rpc.sh@17 -- # jq length 00:04:00.537 10:31:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.537 10:31:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 [2024-12-03 10:31:30.860343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:00.537 [2024-12-03 10:31:30.860403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.537 [2024-12-03 10:31:30.860424] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:00.537 [2024-12-03 10:31:30.860436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.537 [2024-12-03 10:31:30.862581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.537 [2024-12-03 10:31:30.862618] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.537 Passthru0 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.537 { 00:04:00.537 "name": "Malloc0", 00:04:00.537 "aliases": [ 00:04:00.537 "fd304d44-bdd1-4b33-81b5-3e8d3f1803ac" 00:04:00.537 ], 00:04:00.537 "product_name": "Malloc disk", 00:04:00.537 "block_size": 512, 00:04:00.537 "num_blocks": 16384, 00:04:00.537 "uuid": "fd304d44-bdd1-4b33-81b5-3e8d3f1803ac", 00:04:00.537 "assigned_rate_limits": { 00:04:00.537 "rw_ios_per_sec": 0, 00:04:00.537 "rw_mbytes_per_sec": 0, 00:04:00.537 "r_mbytes_per_sec": 0, 00:04:00.537 "w_mbytes_per_sec": 0 00:04:00.537 }, 00:04:00.537 "claimed": true, 00:04:00.537 "claim_type": "exclusive_write", 00:04:00.537 "zoned": false, 00:04:00.537 "supported_io_types": { 00:04:00.537 "read": true, 00:04:00.537 "write": true, 00:04:00.537 "unmap": true, 00:04:00.537 "write_zeroes": true, 00:04:00.537 "flush": true, 00:04:00.537 "reset": true, 00:04:00.537 "compare": false, 00:04:00.537 "compare_and_write": false, 00:04:00.537 "abort": true, 00:04:00.537 "nvme_admin": false, 00:04:00.537 "nvme_io": false 00:04:00.537 }, 00:04:00.537 "memory_domains": [ 00:04:00.537 { 00:04:00.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.537 "dma_device_type": 2 00:04:00.537 } 00:04:00.537 ], 00:04:00.537 "driver_specific": {} 00:04:00.537 }, 00:04:00.537 { 00:04:00.537 "name": "Passthru0", 00:04:00.537 "aliases": [ 00:04:00.537 "e01031d0-55cf-5853-bdb3-153ef29aea19" 00:04:00.537 ], 00:04:00.537 "product_name": "passthru", 00:04:00.537 "block_size": 512, 00:04:00.537 "num_blocks": 16384, 00:04:00.537 "uuid": "e01031d0-55cf-5853-bdb3-153ef29aea19", 00:04:00.537 "assigned_rate_limits": { 00:04:00.537 "rw_ios_per_sec": 0, 00:04:00.537 "rw_mbytes_per_sec": 0, 00:04:00.537 "r_mbytes_per_sec": 0, 00:04:00.537 "w_mbytes_per_sec": 0 00:04:00.537 }, 00:04:00.537 "claimed": false, 00:04:00.537 "zoned": false, 00:04:00.537 "supported_io_types": { 00:04:00.537 "read": true, 00:04:00.537 "write": true, 00:04:00.537 "unmap": true, 00:04:00.537 
"write_zeroes": true, 00:04:00.537 "flush": true, 00:04:00.537 "reset": true, 00:04:00.537 "compare": false, 00:04:00.537 "compare_and_write": false, 00:04:00.537 "abort": true, 00:04:00.537 "nvme_admin": false, 00:04:00.537 "nvme_io": false 00:04:00.537 }, 00:04:00.537 "memory_domains": [ 00:04:00.537 { 00:04:00.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.537 "dma_device_type": 2 00:04:00.537 } 00:04:00.537 ], 00:04:00.537 "driver_specific": { 00:04:00.537 "passthru": { 00:04:00.537 "name": "Passthru0", 00:04:00.537 "base_bdev_name": "Malloc0" 00:04:00.537 } 00:04:00.537 } 00:04:00.537 } 00:04:00.537 ]' 00:04:00.537 10:31:30 -- rpc/rpc.sh@21 -- # jq length 00:04:00.537 10:31:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.537 10:31:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.537 10:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.537 10:31:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.537 10:31:30 -- rpc/rpc.sh@26 -- # jq length 00:04:00.537 10:31:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.537 00:04:00.537 real 0m0.233s 00:04:00.537 user 0m0.130s 00:04:00.537 sys 0m0.028s 00:04:00.537 10:31:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.537 ************************************ 00:04:00.537 END TEST rpc_integrity 00:04:00.537 ************************************ 00:04:00.537 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.537 10:31:31 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.537 10:31:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.537 10:31:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.537 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.538 ************************************ 00:04:00.538 START TEST rpc_plugins 00:04:00.538 ************************************ 00:04:00.538 10:31:31 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:00.538 10:31:31 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.538 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.538 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.538 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.538 10:31:31 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.538 10:31:31 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:00.538 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.538 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.538 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.538 10:31:31 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.538 { 00:04:00.538 "name": "Malloc1", 00:04:00.538 "aliases": [ 00:04:00.538 "409161a7-27ac-4844-9ec6-5beacb932192" 00:04:00.538 ], 00:04:00.538 "product_name": "Malloc disk", 00:04:00.538 
"block_size": 4096, 00:04:00.538 "num_blocks": 256, 00:04:00.538 "uuid": "409161a7-27ac-4844-9ec6-5beacb932192", 00:04:00.538 "assigned_rate_limits": { 00:04:00.538 "rw_ios_per_sec": 0, 00:04:00.538 "rw_mbytes_per_sec": 0, 00:04:00.538 "r_mbytes_per_sec": 0, 00:04:00.538 "w_mbytes_per_sec": 0 00:04:00.538 }, 00:04:00.538 "claimed": false, 00:04:00.538 "zoned": false, 00:04:00.538 "supported_io_types": { 00:04:00.538 "read": true, 00:04:00.538 "write": true, 00:04:00.538 "unmap": true, 00:04:00.538 "write_zeroes": true, 00:04:00.538 "flush": true, 00:04:00.538 "reset": true, 00:04:00.538 "compare": false, 00:04:00.538 "compare_and_write": false, 00:04:00.538 "abort": true, 00:04:00.538 "nvme_admin": false, 00:04:00.538 "nvme_io": false 00:04:00.538 }, 00:04:00.538 "memory_domains": [ 00:04:00.538 { 00:04:00.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.538 "dma_device_type": 2 00:04:00.538 } 00:04:00.538 ], 00:04:00.538 "driver_specific": {} 00:04:00.538 } 00:04:00.538 ]' 00:04:00.538 10:31:31 -- rpc/rpc.sh@32 -- # jq length 00:04:00.538 10:31:31 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:00.538 10:31:31 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:00.538 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.538 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.538 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.538 10:31:31 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:00.538 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.538 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.538 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.538 10:31:31 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:00.538 10:31:31 -- rpc/rpc.sh@36 -- # jq length 00:04:00.799 10:31:31 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:00.799 00:04:00.799 real 0m0.114s 00:04:00.799 user 0m0.066s 00:04:00.799 sys 0m0.014s 00:04:00.799 10:31:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.799 ************************************ 00:04:00.799 END TEST rpc_plugins 00:04:00.799 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.799 ************************************ 00:04:00.799 10:31:31 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:00.799 10:31:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.799 10:31:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.799 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.799 ************************************ 00:04:00.799 START TEST rpc_trace_cmd_test 00:04:00.799 ************************************ 00:04:00.799 10:31:31 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:00.799 10:31:31 -- rpc/rpc.sh@40 -- # local info 00:04:00.799 10:31:31 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:00.799 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.799 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.799 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.799 10:31:31 -- rpc/rpc.sh@42 -- # info='{ 00:04:00.799 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56184", 00:04:00.799 "tpoint_group_mask": "0x8", 00:04:00.799 "iscsi_conn": { 00:04:00.799 "mask": "0x2", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "scsi": { 00:04:00.799 "mask": "0x4", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "bdev": { 00:04:00.799 "mask": "0x8", 00:04:00.799 "tpoint_mask": 
"0xffffffffffffffff" 00:04:00.799 }, 00:04:00.799 "nvmf_rdma": { 00:04:00.799 "mask": "0x10", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "nvmf_tcp": { 00:04:00.799 "mask": "0x20", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "ftl": { 00:04:00.799 "mask": "0x40", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "blobfs": { 00:04:00.799 "mask": "0x80", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "dsa": { 00:04:00.799 "mask": "0x200", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "thread": { 00:04:00.799 "mask": "0x400", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "nvme_pcie": { 00:04:00.799 "mask": "0x800", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "iaa": { 00:04:00.799 "mask": "0x1000", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "nvme_tcp": { 00:04:00.799 "mask": "0x2000", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 }, 00:04:00.799 "bdev_nvme": { 00:04:00.799 "mask": "0x4000", 00:04:00.799 "tpoint_mask": "0x0" 00:04:00.799 } 00:04:00.799 }' 00:04:00.799 10:31:31 -- rpc/rpc.sh@43 -- # jq length 00:04:00.799 10:31:31 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:00.799 10:31:31 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:00.799 10:31:31 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:00.799 10:31:31 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:00.799 10:31:31 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:00.799 10:31:31 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:00.799 10:31:31 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:00.799 10:31:31 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:00.799 10:31:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:00.799 00:04:00.799 real 0m0.172s 00:04:00.799 user 0m0.127s 00:04:00.799 sys 0m0.026s 00:04:00.799 10:31:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.799 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.799 ************************************ 00:04:00.799 END TEST rpc_trace_cmd_test 00:04:00.799 ************************************ 00:04:00.799 10:31:31 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:00.799 10:31:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:00.799 10:31:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:00.799 10:31:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.799 10:31:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.799 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:00.799 ************************************ 00:04:00.799 START TEST rpc_daemon_integrity 00:04:00.799 ************************************ 00:04:00.799 10:31:31 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:00.799 10:31:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.799 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.799 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.060 10:31:31 -- rpc/rpc.sh@13 -- # jq length 00:04:01.060 10:31:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.060 10:31:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.060 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:01.060 10:31:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.060 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.060 { 00:04:01.060 "name": "Malloc2", 00:04:01.060 "aliases": [ 00:04:01.060 "6c768be4-7110-44d5-ba7f-35af5adf4c69" 00:04:01.060 ], 00:04:01.060 "product_name": "Malloc disk", 00:04:01.060 "block_size": 512, 00:04:01.060 "num_blocks": 16384, 00:04:01.060 "uuid": "6c768be4-7110-44d5-ba7f-35af5adf4c69", 00:04:01.060 "assigned_rate_limits": { 00:04:01.060 "rw_ios_per_sec": 0, 00:04:01.060 "rw_mbytes_per_sec": 0, 00:04:01.060 "r_mbytes_per_sec": 0, 00:04:01.060 "w_mbytes_per_sec": 0 00:04:01.060 }, 00:04:01.060 "claimed": false, 00:04:01.060 "zoned": false, 00:04:01.060 "supported_io_types": { 00:04:01.060 "read": true, 00:04:01.060 "write": true, 00:04:01.060 "unmap": true, 00:04:01.060 "write_zeroes": true, 00:04:01.060 "flush": true, 00:04:01.060 "reset": true, 00:04:01.060 "compare": false, 00:04:01.060 "compare_and_write": false, 00:04:01.060 "abort": true, 00:04:01.060 "nvme_admin": false, 00:04:01.060 "nvme_io": false 00:04:01.060 }, 00:04:01.060 "memory_domains": [ 00:04:01.060 { 00:04:01.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.060 "dma_device_type": 2 00:04:01.060 } 00:04:01.060 ], 00:04:01.060 "driver_specific": {} 00:04:01.060 } 00:04:01.060 ]' 00:04:01.060 10:31:31 -- rpc/rpc.sh@17 -- # jq length 00:04:01.060 10:31:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.060 10:31:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:01.060 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 [2024-12-03 10:31:31.507682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:01.060 [2024-12-03 10:31:31.507735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.060 [2024-12-03 10:31:31.507753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:01.060 [2024-12-03 10:31:31.507763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.060 [2024-12-03 10:31:31.509799] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.060 [2024-12-03 10:31:31.509834] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.060 Passthru0 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.060 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.060 { 00:04:01.060 "name": "Malloc2", 00:04:01.060 "aliases": [ 00:04:01.060 "6c768be4-7110-44d5-ba7f-35af5adf4c69" 00:04:01.060 ], 00:04:01.060 "product_name": "Malloc disk", 00:04:01.060 "block_size": 512, 00:04:01.060 "num_blocks": 16384, 00:04:01.060 "uuid": "6c768be4-7110-44d5-ba7f-35af5adf4c69", 00:04:01.060 "assigned_rate_limits": { 00:04:01.060 "rw_ios_per_sec": 0, 00:04:01.060 "rw_mbytes_per_sec": 0, 00:04:01.060 "r_mbytes_per_sec": 0, 00:04:01.060 
"w_mbytes_per_sec": 0 00:04:01.060 }, 00:04:01.060 "claimed": true, 00:04:01.060 "claim_type": "exclusive_write", 00:04:01.060 "zoned": false, 00:04:01.060 "supported_io_types": { 00:04:01.060 "read": true, 00:04:01.060 "write": true, 00:04:01.060 "unmap": true, 00:04:01.060 "write_zeroes": true, 00:04:01.060 "flush": true, 00:04:01.060 "reset": true, 00:04:01.060 "compare": false, 00:04:01.060 "compare_and_write": false, 00:04:01.060 "abort": true, 00:04:01.060 "nvme_admin": false, 00:04:01.060 "nvme_io": false 00:04:01.060 }, 00:04:01.060 "memory_domains": [ 00:04:01.060 { 00:04:01.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.060 "dma_device_type": 2 00:04:01.060 } 00:04:01.060 ], 00:04:01.060 "driver_specific": {} 00:04:01.060 }, 00:04:01.060 { 00:04:01.060 "name": "Passthru0", 00:04:01.060 "aliases": [ 00:04:01.060 "a8dbed67-f2a6-56bd-92bd-f1f35613e6ba" 00:04:01.060 ], 00:04:01.060 "product_name": "passthru", 00:04:01.060 "block_size": 512, 00:04:01.060 "num_blocks": 16384, 00:04:01.060 "uuid": "a8dbed67-f2a6-56bd-92bd-f1f35613e6ba", 00:04:01.060 "assigned_rate_limits": { 00:04:01.060 "rw_ios_per_sec": 0, 00:04:01.060 "rw_mbytes_per_sec": 0, 00:04:01.060 "r_mbytes_per_sec": 0, 00:04:01.060 "w_mbytes_per_sec": 0 00:04:01.060 }, 00:04:01.060 "claimed": false, 00:04:01.060 "zoned": false, 00:04:01.060 "supported_io_types": { 00:04:01.060 "read": true, 00:04:01.060 "write": true, 00:04:01.060 "unmap": true, 00:04:01.060 "write_zeroes": true, 00:04:01.060 "flush": true, 00:04:01.060 "reset": true, 00:04:01.060 "compare": false, 00:04:01.060 "compare_and_write": false, 00:04:01.060 "abort": true, 00:04:01.060 "nvme_admin": false, 00:04:01.060 "nvme_io": false 00:04:01.060 }, 00:04:01.060 "memory_domains": [ 00:04:01.060 { 00:04:01.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.060 "dma_device_type": 2 00:04:01.060 } 00:04:01.060 ], 00:04:01.060 "driver_specific": { 00:04:01.060 "passthru": { 00:04:01.060 "name": "Passthru0", 00:04:01.060 "base_bdev_name": "Malloc2" 00:04:01.060 } 00:04:01.060 } 00:04:01.060 } 00:04:01.060 ]' 00:04:01.060 10:31:31 -- rpc/rpc.sh@21 -- # jq length 00:04:01.060 10:31:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.060 10:31:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.060 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:01.060 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.060 10:31:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 10:31:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.060 10:31:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.060 10:31:31 -- rpc/rpc.sh@26 -- # jq length 00:04:01.060 10:31:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.060 00:04:01.060 real 0m0.235s 00:04:01.060 user 0m0.127s 00:04:01.060 sys 0m0.025s 00:04:01.060 10:31:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.060 10:31:31 -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 ************************************ 00:04:01.061 END TEST 
rpc_daemon_integrity 00:04:01.061 ************************************ 00:04:01.061 10:31:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.061 10:31:31 -- rpc/rpc.sh@84 -- # killprocess 56184 00:04:01.061 10:31:31 -- common/autotest_common.sh@936 -- # '[' -z 56184 ']' 00:04:01.061 10:31:31 -- common/autotest_common.sh@940 -- # kill -0 56184 00:04:01.061 10:31:31 -- common/autotest_common.sh@941 -- # uname 00:04:01.321 10:31:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:01.321 10:31:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56184 00:04:01.321 10:31:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:01.321 10:31:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:01.321 killing process with pid 56184 00:04:01.321 10:31:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56184' 00:04:01.321 10:31:31 -- common/autotest_common.sh@955 -- # kill 56184 00:04:01.321 10:31:31 -- common/autotest_common.sh@960 -- # wait 56184 00:04:02.708 00:04:02.708 real 0m3.865s 00:04:02.708 user 0m4.437s 00:04:02.708 sys 0m0.571s 00:04:02.708 10:31:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.708 10:31:32 -- common/autotest_common.sh@10 -- # set +x 00:04:02.708 ************************************ 00:04:02.708 END TEST rpc 00:04:02.708 ************************************ 00:04:02.708 10:31:32 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:02.708 10:31:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.708 10:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.708 10:31:32 -- common/autotest_common.sh@10 -- # set +x 00:04:02.708 ************************************ 00:04:02.708 START TEST rpc_client 00:04:02.708 ************************************ 00:04:02.708 10:31:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:02.708 * Looking for test storage... 00:04:02.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:02.708 10:31:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:02.708 10:31:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:02.708 10:31:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:02.708 10:31:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:02.708 10:31:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:02.708 10:31:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:02.708 10:31:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:02.708 10:31:33 -- scripts/common.sh@335 -- # IFS=.-: 00:04:02.708 10:31:33 -- scripts/common.sh@335 -- # read -ra ver1 00:04:02.708 10:31:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.708 10:31:33 -- scripts/common.sh@336 -- # read -ra ver2 00:04:02.708 10:31:33 -- scripts/common.sh@337 -- # local 'op=<' 00:04:02.708 10:31:33 -- scripts/common.sh@339 -- # ver1_l=2 00:04:02.708 10:31:33 -- scripts/common.sh@340 -- # ver2_l=1 00:04:02.708 10:31:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:02.708 10:31:33 -- scripts/common.sh@343 -- # case "$op" in 00:04:02.708 10:31:33 -- scripts/common.sh@344 -- # : 1 00:04:02.708 10:31:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:02.708 10:31:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.708 10:31:33 -- scripts/common.sh@364 -- # decimal 1 00:04:02.708 10:31:33 -- scripts/common.sh@352 -- # local d=1 00:04:02.708 10:31:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.708 10:31:33 -- scripts/common.sh@354 -- # echo 1 00:04:02.708 10:31:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:02.708 10:31:33 -- scripts/common.sh@365 -- # decimal 2 00:04:02.708 10:31:33 -- scripts/common.sh@352 -- # local d=2 00:04:02.708 10:31:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.708 10:31:33 -- scripts/common.sh@354 -- # echo 2 00:04:02.708 10:31:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:02.708 10:31:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:02.709 10:31:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:02.709 10:31:33 -- scripts/common.sh@367 -- # return 0 00:04:02.709 10:31:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:02.709 OK 00:04:02.709 10:31:33 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:02.709 00:04:02.709 real 0m0.175s 00:04:02.709 user 0m0.109s 00:04:02.709 sys 0m0.076s 00:04:02.709 10:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.709 ************************************ 00:04:02.709 END TEST rpc_client 00:04:02.709 10:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:02.709 ************************************ 00:04:02.709 10:31:33 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:02.709 10:31:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.709 10:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:02.709 ************************************ 00:04:02.709 START TEST 
json_config 00:04:02.709 ************************************ 00:04:02.709 10:31:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:02.709 10:31:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:02.709 10:31:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:02.709 10:31:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:02.709 10:31:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:02.709 10:31:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:02.709 10:31:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:02.709 10:31:33 -- scripts/common.sh@335 -- # IFS=.-: 00:04:02.709 10:31:33 -- scripts/common.sh@335 -- # read -ra ver1 00:04:02.709 10:31:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.709 10:31:33 -- scripts/common.sh@336 -- # read -ra ver2 00:04:02.709 10:31:33 -- scripts/common.sh@337 -- # local 'op=<' 00:04:02.709 10:31:33 -- scripts/common.sh@339 -- # ver1_l=2 00:04:02.709 10:31:33 -- scripts/common.sh@340 -- # ver2_l=1 00:04:02.709 10:31:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:02.709 10:31:33 -- scripts/common.sh@343 -- # case "$op" in 00:04:02.709 10:31:33 -- scripts/common.sh@344 -- # : 1 00:04:02.709 10:31:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:02.709 10:31:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.709 10:31:33 -- scripts/common.sh@364 -- # decimal 1 00:04:02.709 10:31:33 -- scripts/common.sh@352 -- # local d=1 00:04:02.709 10:31:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.709 10:31:33 -- scripts/common.sh@354 -- # echo 1 00:04:02.709 10:31:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:02.709 10:31:33 -- scripts/common.sh@365 -- # decimal 2 00:04:02.709 10:31:33 -- scripts/common.sh@352 -- # local d=2 00:04:02.709 10:31:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.709 10:31:33 -- scripts/common.sh@354 -- # echo 2 00:04:02.709 10:31:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:02.709 10:31:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:02.709 10:31:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:02.709 10:31:33 -- scripts/common.sh@367 -- # return 0 00:04:02.709 10:31:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc 
geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:02.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.709 --rc genhtml_branch_coverage=1 00:04:02.709 --rc genhtml_function_coverage=1 00:04:02.709 --rc genhtml_legend=1 00:04:02.709 --rc geninfo_all_blocks=1 00:04:02.709 --rc geninfo_unexecuted_blocks=1 00:04:02.709 00:04:02.709 ' 00:04:02.709 10:31:33 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:02.709 10:31:33 -- nvmf/common.sh@7 -- # uname -s 00:04:02.709 10:31:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.709 10:31:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.709 10:31:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.709 10:31:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.709 10:31:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.709 10:31:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.709 10:31:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.709 10:31:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.709 10:31:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.709 10:31:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.709 10:31:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15b22541-b866-4b77-a57b-12205cff22be 00:04:02.709 10:31:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=15b22541-b866-4b77-a57b-12205cff22be 00:04:02.709 10:31:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.709 10:31:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.709 10:31:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.709 10:31:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:02.709 10:31:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.709 10:31:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.709 10:31:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.709 10:31:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.709 10:31:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.709 10:31:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.709 
10:31:33 -- paths/export.sh@5 -- # export PATH 00:04:02.709 10:31:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.709 10:31:33 -- nvmf/common.sh@46 -- # : 0 00:04:02.709 10:31:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:02.709 10:31:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:02.709 10:31:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:02.709 10:31:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.709 10:31:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.709 10:31:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:02.709 10:31:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:02.709 10:31:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:02.709 10:31:33 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:02.709 10:31:33 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:02.709 10:31:33 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:02.709 10:31:33 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:02.710 WARNING: No tests are enabled so not running JSON configuration tests 00:04:02.710 10:31:33 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:02.710 10:31:33 -- json_config/json_config.sh@27 -- # exit 0 00:04:02.710 00:04:02.710 real 0m0.124s 00:04:02.710 user 0m0.067s 00:04:02.710 sys 0m0.054s 00:04:02.710 10:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.710 10:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:02.710 ************************************ 00:04:02.710 END TEST json_config 00:04:02.710 ************************************ 00:04:02.972 10:31:33 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:02.972 10:31:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.972 10:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.972 10:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:02.972 ************************************ 00:04:02.972 START TEST json_config_extra_key 00:04:02.972 ************************************ 00:04:02.972 10:31:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:02.972 10:31:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:02.972 10:31:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:02.972 10:31:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:02.972 10:31:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:02.972 10:31:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:02.972 10:31:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:02.972 10:31:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:02.972 10:31:33 -- scripts/common.sh@335 -- # IFS=.-: 00:04:02.972 10:31:33 -- scripts/common.sh@335 -- # read -ra ver1 00:04:02.972 10:31:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.972 10:31:33 
-- scripts/common.sh@336 -- # read -ra ver2 00:04:02.972 10:31:33 -- scripts/common.sh@337 -- # local 'op=<' 00:04:02.972 10:31:33 -- scripts/common.sh@339 -- # ver1_l=2 00:04:02.972 10:31:33 -- scripts/common.sh@340 -- # ver2_l=1 00:04:02.972 10:31:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:02.972 10:31:33 -- scripts/common.sh@343 -- # case "$op" in 00:04:02.972 10:31:33 -- scripts/common.sh@344 -- # : 1 00:04:02.972 10:31:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:02.972 10:31:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.972 10:31:33 -- scripts/common.sh@364 -- # decimal 1 00:04:02.972 10:31:33 -- scripts/common.sh@352 -- # local d=1 00:04:02.972 10:31:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.972 10:31:33 -- scripts/common.sh@354 -- # echo 1 00:04:02.972 10:31:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:02.972 10:31:33 -- scripts/common.sh@365 -- # decimal 2 00:04:02.972 10:31:33 -- scripts/common.sh@352 -- # local d=2 00:04:02.972 10:31:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.972 10:31:33 -- scripts/common.sh@354 -- # echo 2 00:04:02.972 10:31:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:02.972 10:31:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:02.972 10:31:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:02.972 10:31:33 -- scripts/common.sh@367 -- # return 0 00:04:02.972 10:31:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.972 10:31:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:02.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.972 --rc genhtml_branch_coverage=1 00:04:02.972 --rc genhtml_function_coverage=1 00:04:02.972 --rc genhtml_legend=1 00:04:02.972 --rc geninfo_all_blocks=1 00:04:02.973 --rc geninfo_unexecuted_blocks=1 00:04:02.973 00:04:02.973 ' 00:04:02.973 10:31:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:02.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.973 --rc genhtml_branch_coverage=1 00:04:02.973 --rc genhtml_function_coverage=1 00:04:02.973 --rc genhtml_legend=1 00:04:02.973 --rc geninfo_all_blocks=1 00:04:02.973 --rc geninfo_unexecuted_blocks=1 00:04:02.973 00:04:02.973 ' 00:04:02.973 10:31:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:02.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.973 --rc genhtml_branch_coverage=1 00:04:02.973 --rc genhtml_function_coverage=1 00:04:02.973 --rc genhtml_legend=1 00:04:02.973 --rc geninfo_all_blocks=1 00:04:02.973 --rc geninfo_unexecuted_blocks=1 00:04:02.973 00:04:02.973 ' 00:04:02.973 10:31:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:02.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.973 --rc genhtml_branch_coverage=1 00:04:02.973 --rc genhtml_function_coverage=1 00:04:02.973 --rc genhtml_legend=1 00:04:02.973 --rc geninfo_all_blocks=1 00:04:02.973 --rc geninfo_unexecuted_blocks=1 00:04:02.973 00:04:02.973 ' 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:02.973 10:31:33 -- nvmf/common.sh@7 -- # uname -s 00:04:02.973 10:31:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.973 10:31:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.973 10:31:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.973 10:31:33 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:04:02.973 10:31:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.973 10:31:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.973 10:31:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.973 10:31:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.973 10:31:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.973 10:31:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.973 10:31:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15b22541-b866-4b77-a57b-12205cff22be 00:04:02.973 10:31:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=15b22541-b866-4b77-a57b-12205cff22be 00:04:02.973 10:31:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.973 10:31:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.973 10:31:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.973 10:31:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:02.973 10:31:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.973 10:31:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.973 10:31:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.973 10:31:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.973 10:31:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.973 10:31:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.973 10:31:33 -- paths/export.sh@5 -- # export PATH 00:04:02.973 10:31:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.973 10:31:33 -- nvmf/common.sh@46 -- # : 0 00:04:02.973 10:31:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:02.973 10:31:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:02.973 10:31:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:02.973 10:31:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.973 10:31:33 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.973 10:31:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:02.973 10:31:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:02.973 10:31:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.973 INFO: launching applications... 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56496 00:04:02.973 Waiting for target to run... 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56496 /var/tmp/spdk_tgt.sock 00:04:02.973 10:31:33 -- common/autotest_common.sh@829 -- # '[' -z 56496 ']' 00:04:02.973 10:31:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.973 10:31:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.973 10:31:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.973 10:31:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.973 10:31:33 -- common/autotest_common.sh@10 -- # set +x 00:04:02.973 10:31:33 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:02.973 [2024-12-03 10:31:33.498030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:02.973 [2024-12-03 10:31:33.498127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56496 ] 00:04:03.235 [2024-12-03 10:31:33.803003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.497 [2024-12-03 10:31:33.980700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.497 [2024-12-03 10:31:33.980929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.466 10:31:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:04.466 10:31:34 -- common/autotest_common.sh@862 -- # return 0 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:04.466 00:04:04.466 INFO: shutting down applications... 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56496 ]] 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56496 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56496 00:04:04.466 10:31:34 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:05.053 10:31:35 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:05.053 10:31:35 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:05.053 10:31:35 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56496 00:04:05.053 10:31:35 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:05.625 10:31:35 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:05.625 10:31:35 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:05.625 10:31:35 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56496 00:04:05.625 10:31:35 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:05.887 10:31:36 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:05.887 10:31:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:05.887 10:31:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56496 00:04:05.887 10:31:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56496 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:06.461 SPDK target shutdown done 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:06.461 Success 00:04:06.461 10:31:36 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:06.461 
************************************ 00:04:06.461 END TEST json_config_extra_key 00:04:06.461 ************************************ 00:04:06.461 00:04:06.461 real 0m3.670s 00:04:06.461 user 0m3.369s 00:04:06.461 sys 0m0.383s 00:04:06.461 10:31:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.461 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:06.461 10:31:37 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.461 10:31:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.461 10:31:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.461 10:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:06.461 ************************************ 00:04:06.461 START TEST alias_rpc 00:04:06.461 ************************************ 00:04:06.461 10:31:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.723 * Looking for test storage... 00:04:06.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:06.723 10:31:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:06.723 10:31:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:06.723 10:31:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:06.723 10:31:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:06.723 10:31:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:06.723 10:31:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:06.723 10:31:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:06.723 10:31:37 -- scripts/common.sh@335 -- # IFS=.-: 00:04:06.723 10:31:37 -- scripts/common.sh@335 -- # read -ra ver1 00:04:06.723 10:31:37 -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.723 10:31:37 -- scripts/common.sh@336 -- # read -ra ver2 00:04:06.723 10:31:37 -- scripts/common.sh@337 -- # local 'op=<' 00:04:06.723 10:31:37 -- scripts/common.sh@339 -- # ver1_l=2 00:04:06.723 10:31:37 -- scripts/common.sh@340 -- # ver2_l=1 00:04:06.723 10:31:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:06.723 10:31:37 -- scripts/common.sh@343 -- # case "$op" in 00:04:06.723 10:31:37 -- scripts/common.sh@344 -- # : 1 00:04:06.723 10:31:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:06.723 10:31:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.723 10:31:37 -- scripts/common.sh@364 -- # decimal 1 00:04:06.723 10:31:37 -- scripts/common.sh@352 -- # local d=1 00:04:06.723 10:31:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.723 10:31:37 -- scripts/common.sh@354 -- # echo 1 00:04:06.723 10:31:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:06.723 10:31:37 -- scripts/common.sh@365 -- # decimal 2 00:04:06.723 10:31:37 -- scripts/common.sh@352 -- # local d=2 00:04:06.723 10:31:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.723 10:31:37 -- scripts/common.sh@354 -- # echo 2 00:04:06.723 10:31:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:06.723 10:31:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:06.723 10:31:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:06.723 10:31:37 -- scripts/common.sh@367 -- # return 0 00:04:06.723 10:31:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.723 10:31:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:06.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.723 --rc genhtml_branch_coverage=1 00:04:06.723 --rc genhtml_function_coverage=1 00:04:06.723 --rc genhtml_legend=1 00:04:06.723 --rc geninfo_all_blocks=1 00:04:06.723 --rc geninfo_unexecuted_blocks=1 00:04:06.723 00:04:06.723 ' 00:04:06.723 10:31:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:06.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.723 --rc genhtml_branch_coverage=1 00:04:06.723 --rc genhtml_function_coverage=1 00:04:06.723 --rc genhtml_legend=1 00:04:06.724 --rc geninfo_all_blocks=1 00:04:06.724 --rc geninfo_unexecuted_blocks=1 00:04:06.724 00:04:06.724 ' 00:04:06.724 10:31:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:06.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.724 --rc genhtml_branch_coverage=1 00:04:06.724 --rc genhtml_function_coverage=1 00:04:06.724 --rc genhtml_legend=1 00:04:06.724 --rc geninfo_all_blocks=1 00:04:06.724 --rc geninfo_unexecuted_blocks=1 00:04:06.724 00:04:06.724 ' 00:04:06.724 10:31:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:06.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.724 --rc genhtml_branch_coverage=1 00:04:06.724 --rc genhtml_function_coverage=1 00:04:06.724 --rc genhtml_legend=1 00:04:06.724 --rc geninfo_all_blocks=1 00:04:06.724 --rc geninfo_unexecuted_blocks=1 00:04:06.724 00:04:06.724 ' 00:04:06.724 10:31:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:06.724 10:31:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56596 00:04:06.724 10:31:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56596 00:04:06.724 10:31:37 -- common/autotest_common.sh@829 -- # '[' -z 56596 ']' 00:04:06.724 10:31:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.724 10:31:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:06.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.724 10:31:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
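The lcov gate traced above ("lt 1.15 2") splits each version string on '.', '-' and ':' and compares the numeric fields left to right. A condensed sketch of that comparison (cmp_versions in scripts/common.sh implements the general form, with '>', '=' and mixed-length handling):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov 1.15 < 2'   # prints: lcov 1.15 < 2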
00:04:06.724 10:31:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:06.724 10:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:06.724 10:31:37 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.724 [2024-12-03 10:31:37.249882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:06.724 [2024-12-03 10:31:37.250033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56596 ] 00:04:06.985 [2024-12-03 10:31:37.401735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.985 [2024-12-03 10:31:37.562411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:06.985 [2024-12-03 10:31:37.562573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.558 10:31:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:07.558 10:31:38 -- common/autotest_common.sh@862 -- # return 0 00:04:07.558 10:31:38 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:07.818 10:31:38 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56596 00:04:07.818 10:31:38 -- common/autotest_common.sh@936 -- # '[' -z 56596 ']' 00:04:07.818 10:31:38 -- common/autotest_common.sh@940 -- # kill -0 56596 00:04:07.818 10:31:38 -- common/autotest_common.sh@941 -- # uname 00:04:07.818 10:31:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:07.818 10:31:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56596 00:04:07.818 10:31:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:07.818 10:31:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:07.818 killing process with pid 56596 00:04:07.818 10:31:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56596' 00:04:07.818 10:31:38 -- common/autotest_common.sh@955 -- # kill 56596 00:04:07.818 10:31:38 -- common/autotest_common.sh@960 -- # wait 56596 00:04:09.207 00:04:09.207 real 0m2.492s 00:04:09.207 user 0m2.633s 00:04:09.207 sys 0m0.385s 00:04:09.207 10:31:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.207 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:09.207 ************************************ 00:04:09.207 END TEST alias_rpc 00:04:09.207 ************************************ 00:04:09.207 10:31:39 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:09.207 10:31:39 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:09.207 10:31:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.207 10:31:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.208 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:09.208 ************************************ 00:04:09.208 START TEST spdkcli_tcp 00:04:09.208 ************************************ 00:04:09.208 10:31:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:09.208 * Looking for test storage... 
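The teardown traced just above is the killprocess pattern: verify the pid is still alive, confirm it is an SPDK reactor rather than a sudo wrapper, then signal and reap it. A minimal sketch (the real helper in autotest_common.sh also branches on uname for non-Linux hosts):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # not running anymore
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"                                 # SIGTERM by default
        wait "$pid"                                 # reap; pid must be a child
    }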
00:04:09.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:09.208 10:31:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:09.208 10:31:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:09.208 10:31:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:09.208 10:31:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:09.208 10:31:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:09.208 10:31:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:09.208 10:31:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:09.208 10:31:39 -- scripts/common.sh@335 -- # IFS=.-: 00:04:09.208 10:31:39 -- scripts/common.sh@335 -- # read -ra ver1 00:04:09.208 10:31:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.208 10:31:39 -- scripts/common.sh@336 -- # read -ra ver2 00:04:09.208 10:31:39 -- scripts/common.sh@337 -- # local 'op=<' 00:04:09.208 10:31:39 -- scripts/common.sh@339 -- # ver1_l=2 00:04:09.208 10:31:39 -- scripts/common.sh@340 -- # ver2_l=1 00:04:09.208 10:31:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:09.208 10:31:39 -- scripts/common.sh@343 -- # case "$op" in 00:04:09.208 10:31:39 -- scripts/common.sh@344 -- # : 1 00:04:09.208 10:31:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:09.208 10:31:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.208 10:31:39 -- scripts/common.sh@364 -- # decimal 1 00:04:09.208 10:31:39 -- scripts/common.sh@352 -- # local d=1 00:04:09.208 10:31:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.208 10:31:39 -- scripts/common.sh@354 -- # echo 1 00:04:09.208 10:31:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:09.208 10:31:39 -- scripts/common.sh@365 -- # decimal 2 00:04:09.208 10:31:39 -- scripts/common.sh@352 -- # local d=2 00:04:09.208 10:31:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.208 10:31:39 -- scripts/common.sh@354 -- # echo 2 00:04:09.208 10:31:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:09.208 10:31:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:09.208 10:31:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:09.208 10:31:39 -- scripts/common.sh@367 -- # return 0 00:04:09.208 10:31:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.208 10:31:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.208 --rc genhtml_branch_coverage=1 00:04:09.208 --rc genhtml_function_coverage=1 00:04:09.208 --rc genhtml_legend=1 00:04:09.208 --rc geninfo_all_blocks=1 00:04:09.208 --rc geninfo_unexecuted_blocks=1 00:04:09.208 00:04:09.208 ' 00:04:09.208 10:31:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.208 --rc genhtml_branch_coverage=1 00:04:09.208 --rc genhtml_function_coverage=1 00:04:09.208 --rc genhtml_legend=1 00:04:09.208 --rc geninfo_all_blocks=1 00:04:09.208 --rc geninfo_unexecuted_blocks=1 00:04:09.208 00:04:09.208 ' 00:04:09.208 10:31:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.208 --rc genhtml_branch_coverage=1 00:04:09.208 --rc genhtml_function_coverage=1 00:04:09.208 --rc genhtml_legend=1 00:04:09.208 --rc geninfo_all_blocks=1 00:04:09.208 --rc geninfo_unexecuted_blocks=1 00:04:09.208 00:04:09.208 ' 00:04:09.208 10:31:39 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:09.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.208 --rc genhtml_branch_coverage=1 00:04:09.208 --rc genhtml_function_coverage=1 00:04:09.208 --rc genhtml_legend=1 00:04:09.208 --rc geninfo_all_blocks=1 00:04:09.208 --rc geninfo_unexecuted_blocks=1 00:04:09.208 00:04:09.208 ' 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:09.208 10:31:39 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:09.208 10:31:39 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:09.208 10:31:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.208 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56691 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@27 -- # waitforlisten 56691 00:04:09.208 10:31:39 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:09.208 10:31:39 -- common/autotest_common.sh@829 -- # '[' -z 56691 ']' 00:04:09.208 10:31:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.208 10:31:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:09.208 10:31:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.208 10:31:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:09.208 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:09.208 [2024-12-03 10:31:39.783716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
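The next stretch of the trace is the core of the tcp test: socat publishes the target's UNIX RPC socket on 127.0.0.1:9998, and rpc.py drives rpc_get_methods through that TCP bridge with retries. Condensed from the exact commands below:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100 / -t 2: connection retries and timeout, as passed in the trace
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"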
00:04:09.208 [2024-12-03 10:31:39.783830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56691 ] 00:04:09.470 [2024-12-03 10:31:39.930205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.730 [2024-12-03 10:31:40.086157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:09.730 [2024-12-03 10:31:40.086432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.730 [2024-12-03 10:31:40.086456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.992 10:31:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:09.992 10:31:40 -- common/autotest_common.sh@862 -- # return 0 00:04:09.992 10:31:40 -- spdkcli/tcp.sh@31 -- # socat_pid=56702 00:04:09.992 10:31:40 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:09.992 10:31:40 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:10.253 [ 00:04:10.253 "bdev_malloc_delete", 00:04:10.253 "bdev_malloc_create", 00:04:10.253 "bdev_null_resize", 00:04:10.253 "bdev_null_delete", 00:04:10.253 "bdev_null_create", 00:04:10.253 "bdev_nvme_cuse_unregister", 00:04:10.253 "bdev_nvme_cuse_register", 00:04:10.253 "bdev_opal_new_user", 00:04:10.253 "bdev_opal_set_lock_state", 00:04:10.253 "bdev_opal_delete", 00:04:10.253 "bdev_opal_get_info", 00:04:10.253 "bdev_opal_create", 00:04:10.253 "bdev_nvme_opal_revert", 00:04:10.253 "bdev_nvme_opal_init", 00:04:10.253 "bdev_nvme_send_cmd", 00:04:10.253 "bdev_nvme_get_path_iostat", 00:04:10.253 "bdev_nvme_get_mdns_discovery_info", 00:04:10.253 "bdev_nvme_stop_mdns_discovery", 00:04:10.253 "bdev_nvme_start_mdns_discovery", 00:04:10.253 "bdev_nvme_set_multipath_policy", 00:04:10.253 "bdev_nvme_set_preferred_path", 00:04:10.253 "bdev_nvme_get_io_paths", 00:04:10.253 "bdev_nvme_remove_error_injection", 00:04:10.253 "bdev_nvme_add_error_injection", 00:04:10.253 "bdev_nvme_get_discovery_info", 00:04:10.253 "bdev_nvme_stop_discovery", 00:04:10.253 "bdev_nvme_start_discovery", 00:04:10.253 "bdev_nvme_get_controller_health_info", 00:04:10.253 "bdev_nvme_disable_controller", 00:04:10.253 "bdev_nvme_enable_controller", 00:04:10.253 "bdev_nvme_reset_controller", 00:04:10.253 "bdev_nvme_get_transport_statistics", 00:04:10.253 "bdev_nvme_apply_firmware", 00:04:10.253 "bdev_nvme_detach_controller", 00:04:10.253 "bdev_nvme_get_controllers", 00:04:10.253 "bdev_nvme_attach_controller", 00:04:10.253 "bdev_nvme_set_hotplug", 00:04:10.253 "bdev_nvme_set_options", 00:04:10.253 "bdev_passthru_delete", 00:04:10.253 "bdev_passthru_create", 00:04:10.253 "bdev_lvol_grow_lvstore", 00:04:10.253 "bdev_lvol_get_lvols", 00:04:10.253 "bdev_lvol_get_lvstores", 00:04:10.253 "bdev_lvol_delete", 00:04:10.253 "bdev_lvol_set_read_only", 00:04:10.253 "bdev_lvol_resize", 00:04:10.253 "bdev_lvol_decouple_parent", 00:04:10.253 "bdev_lvol_inflate", 00:04:10.253 "bdev_lvol_rename", 00:04:10.253 "bdev_lvol_clone_bdev", 00:04:10.253 "bdev_lvol_clone", 00:04:10.253 "bdev_lvol_snapshot", 00:04:10.253 "bdev_lvol_create", 00:04:10.253 "bdev_lvol_delete_lvstore", 00:04:10.253 "bdev_lvol_rename_lvstore", 00:04:10.253 "bdev_lvol_create_lvstore", 00:04:10.253 "bdev_raid_set_options", 00:04:10.253 "bdev_raid_remove_base_bdev", 00:04:10.253 "bdev_raid_add_base_bdev", 
00:04:10.253 "bdev_raid_delete", 00:04:10.253 "bdev_raid_create", 00:04:10.253 "bdev_raid_get_bdevs", 00:04:10.253 "bdev_error_inject_error", 00:04:10.253 "bdev_error_delete", 00:04:10.253 "bdev_error_create", 00:04:10.253 "bdev_split_delete", 00:04:10.253 "bdev_split_create", 00:04:10.253 "bdev_delay_delete", 00:04:10.253 "bdev_delay_create", 00:04:10.253 "bdev_delay_update_latency", 00:04:10.253 "bdev_zone_block_delete", 00:04:10.253 "bdev_zone_block_create", 00:04:10.253 "blobfs_create", 00:04:10.253 "blobfs_detect", 00:04:10.253 "blobfs_set_cache_size", 00:04:10.253 "bdev_xnvme_delete", 00:04:10.253 "bdev_xnvme_create", 00:04:10.253 "bdev_aio_delete", 00:04:10.253 "bdev_aio_rescan", 00:04:10.253 "bdev_aio_create", 00:04:10.253 "bdev_ftl_set_property", 00:04:10.253 "bdev_ftl_get_properties", 00:04:10.253 "bdev_ftl_get_stats", 00:04:10.253 "bdev_ftl_unmap", 00:04:10.253 "bdev_ftl_unload", 00:04:10.253 "bdev_ftl_delete", 00:04:10.253 "bdev_ftl_load", 00:04:10.253 "bdev_ftl_create", 00:04:10.253 "bdev_virtio_attach_controller", 00:04:10.253 "bdev_virtio_scsi_get_devices", 00:04:10.253 "bdev_virtio_detach_controller", 00:04:10.253 "bdev_virtio_blk_set_hotplug", 00:04:10.253 "bdev_iscsi_delete", 00:04:10.253 "bdev_iscsi_create", 00:04:10.253 "bdev_iscsi_set_options", 00:04:10.253 "accel_error_inject_error", 00:04:10.253 "ioat_scan_accel_module", 00:04:10.253 "dsa_scan_accel_module", 00:04:10.253 "iaa_scan_accel_module", 00:04:10.253 "iscsi_set_options", 00:04:10.253 "iscsi_get_auth_groups", 00:04:10.253 "iscsi_auth_group_remove_secret", 00:04:10.253 "iscsi_auth_group_add_secret", 00:04:10.253 "iscsi_delete_auth_group", 00:04:10.253 "iscsi_create_auth_group", 00:04:10.253 "iscsi_set_discovery_auth", 00:04:10.253 "iscsi_get_options", 00:04:10.253 "iscsi_target_node_request_logout", 00:04:10.253 "iscsi_target_node_set_redirect", 00:04:10.253 "iscsi_target_node_set_auth", 00:04:10.253 "iscsi_target_node_add_lun", 00:04:10.253 "iscsi_get_connections", 00:04:10.253 "iscsi_portal_group_set_auth", 00:04:10.253 "iscsi_start_portal_group", 00:04:10.253 "iscsi_delete_portal_group", 00:04:10.253 "iscsi_create_portal_group", 00:04:10.253 "iscsi_get_portal_groups", 00:04:10.253 "iscsi_delete_target_node", 00:04:10.253 "iscsi_target_node_remove_pg_ig_maps", 00:04:10.253 "iscsi_target_node_add_pg_ig_maps", 00:04:10.253 "iscsi_create_target_node", 00:04:10.253 "iscsi_get_target_nodes", 00:04:10.253 "iscsi_delete_initiator_group", 00:04:10.253 "iscsi_initiator_group_remove_initiators", 00:04:10.253 "iscsi_initiator_group_add_initiators", 00:04:10.253 "iscsi_create_initiator_group", 00:04:10.253 "iscsi_get_initiator_groups", 00:04:10.253 "nvmf_set_crdt", 00:04:10.253 "nvmf_set_config", 00:04:10.253 "nvmf_set_max_subsystems", 00:04:10.253 "nvmf_subsystem_get_listeners", 00:04:10.253 "nvmf_subsystem_get_qpairs", 00:04:10.253 "nvmf_subsystem_get_controllers", 00:04:10.253 "nvmf_get_stats", 00:04:10.253 "nvmf_get_transports", 00:04:10.253 "nvmf_create_transport", 00:04:10.253 "nvmf_get_targets", 00:04:10.253 "nvmf_delete_target", 00:04:10.253 "nvmf_create_target", 00:04:10.253 "nvmf_subsystem_allow_any_host", 00:04:10.253 "nvmf_subsystem_remove_host", 00:04:10.253 "nvmf_subsystem_add_host", 00:04:10.253 "nvmf_subsystem_remove_ns", 00:04:10.253 "nvmf_subsystem_add_ns", 00:04:10.253 "nvmf_subsystem_listener_set_ana_state", 00:04:10.253 "nvmf_discovery_get_referrals", 00:04:10.253 "nvmf_discovery_remove_referral", 00:04:10.253 "nvmf_discovery_add_referral", 00:04:10.253 "nvmf_subsystem_remove_listener", 00:04:10.253 
"nvmf_subsystem_add_listener", 00:04:10.253 "nvmf_delete_subsystem", 00:04:10.253 "nvmf_create_subsystem", 00:04:10.253 "nvmf_get_subsystems", 00:04:10.253 "env_dpdk_get_mem_stats", 00:04:10.253 "nbd_get_disks", 00:04:10.253 "nbd_stop_disk", 00:04:10.253 "nbd_start_disk", 00:04:10.253 "ublk_recover_disk", 00:04:10.253 "ublk_get_disks", 00:04:10.253 "ublk_stop_disk", 00:04:10.253 "ublk_start_disk", 00:04:10.253 "ublk_destroy_target", 00:04:10.253 "ublk_create_target", 00:04:10.253 "virtio_blk_create_transport", 00:04:10.253 "virtio_blk_get_transports", 00:04:10.253 "vhost_controller_set_coalescing", 00:04:10.253 "vhost_get_controllers", 00:04:10.253 "vhost_delete_controller", 00:04:10.253 "vhost_create_blk_controller", 00:04:10.253 "vhost_scsi_controller_remove_target", 00:04:10.253 "vhost_scsi_controller_add_target", 00:04:10.253 "vhost_start_scsi_controller", 00:04:10.253 "vhost_create_scsi_controller", 00:04:10.253 "thread_set_cpumask", 00:04:10.254 "framework_get_scheduler", 00:04:10.254 "framework_set_scheduler", 00:04:10.254 "framework_get_reactors", 00:04:10.254 "thread_get_io_channels", 00:04:10.254 "thread_get_pollers", 00:04:10.254 "thread_get_stats", 00:04:10.254 "framework_monitor_context_switch", 00:04:10.254 "spdk_kill_instance", 00:04:10.254 "log_enable_timestamps", 00:04:10.254 "log_get_flags", 00:04:10.254 "log_clear_flag", 00:04:10.254 "log_set_flag", 00:04:10.254 "log_get_level", 00:04:10.254 "log_set_level", 00:04:10.254 "log_get_print_level", 00:04:10.254 "log_set_print_level", 00:04:10.254 "framework_enable_cpumask_locks", 00:04:10.254 "framework_disable_cpumask_locks", 00:04:10.254 "framework_wait_init", 00:04:10.254 "framework_start_init", 00:04:10.254 "scsi_get_devices", 00:04:10.254 "bdev_get_histogram", 00:04:10.254 "bdev_enable_histogram", 00:04:10.254 "bdev_set_qos_limit", 00:04:10.254 "bdev_set_qd_sampling_period", 00:04:10.254 "bdev_get_bdevs", 00:04:10.254 "bdev_reset_iostat", 00:04:10.254 "bdev_get_iostat", 00:04:10.254 "bdev_examine", 00:04:10.254 "bdev_wait_for_examine", 00:04:10.254 "bdev_set_options", 00:04:10.254 "notify_get_notifications", 00:04:10.254 "notify_get_types", 00:04:10.254 "accel_get_stats", 00:04:10.254 "accel_set_options", 00:04:10.254 "accel_set_driver", 00:04:10.254 "accel_crypto_key_destroy", 00:04:10.254 "accel_crypto_keys_get", 00:04:10.254 "accel_crypto_key_create", 00:04:10.254 "accel_assign_opc", 00:04:10.254 "accel_get_module_info", 00:04:10.254 "accel_get_opc_assignments", 00:04:10.254 "vmd_rescan", 00:04:10.254 "vmd_remove_device", 00:04:10.254 "vmd_enable", 00:04:10.254 "sock_set_default_impl", 00:04:10.254 "sock_impl_set_options", 00:04:10.254 "sock_impl_get_options", 00:04:10.254 "iobuf_get_stats", 00:04:10.254 "iobuf_set_options", 00:04:10.254 "framework_get_pci_devices", 00:04:10.254 "framework_get_config", 00:04:10.254 "framework_get_subsystems", 00:04:10.254 "trace_get_info", 00:04:10.254 "trace_get_tpoint_group_mask", 00:04:10.254 "trace_disable_tpoint_group", 00:04:10.254 "trace_enable_tpoint_group", 00:04:10.254 "trace_clear_tpoint_mask", 00:04:10.254 "trace_set_tpoint_mask", 00:04:10.254 "spdk_get_version", 00:04:10.254 "rpc_get_methods" 00:04:10.254 ] 00:04:10.254 10:31:40 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:10.254 10:31:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.254 10:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:10.515 10:31:40 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:10.515 10:31:40 -- spdkcli/tcp.sh@38 -- # killprocess 56691 00:04:10.515 
10:31:40 -- common/autotest_common.sh@936 -- # '[' -z 56691 ']' 00:04:10.515 10:31:40 -- common/autotest_common.sh@940 -- # kill -0 56691 00:04:10.515 10:31:40 -- common/autotest_common.sh@941 -- # uname 00:04:10.515 10:31:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:10.515 10:31:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56691 00:04:10.515 killing process with pid 56691 00:04:10.515 10:31:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:10.515 10:31:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:10.515 10:31:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56691' 00:04:10.515 10:31:40 -- common/autotest_common.sh@955 -- # kill 56691 00:04:10.515 10:31:40 -- common/autotest_common.sh@960 -- # wait 56691 00:04:11.902 ************************************ 00:04:11.902 END TEST spdkcli_tcp 00:04:11.902 ************************************ 00:04:11.902 00:04:11.902 real 0m2.521s 00:04:11.902 user 0m4.429s 00:04:11.902 sys 0m0.420s 00:04:11.902 10:31:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.902 10:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:11.902 10:31:42 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:11.902 10:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.902 10:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.902 10:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:11.902 ************************************ 00:04:11.902 START TEST dpdk_mem_utility 00:04:11.902 ************************************ 00:04:11.902 10:31:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:11.902 * Looking for test storage... 00:04:11.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:11.902 10:31:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:11.902 10:31:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:11.902 10:31:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:11.902 10:31:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:11.902 10:31:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:11.902 10:31:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:11.902 10:31:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:11.902 10:31:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:11.902 10:31:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:11.903 10:31:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.903 10:31:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:11.903 10:31:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:11.903 10:31:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:11.903 10:31:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:11.903 10:31:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:11.903 10:31:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:11.903 10:31:42 -- scripts/common.sh@344 -- # : 1 00:04:11.903 10:31:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:11.903 10:31:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.903 10:31:42 -- scripts/common.sh@364 -- # decimal 1 00:04:11.903 10:31:42 -- scripts/common.sh@352 -- # local d=1 00:04:11.903 10:31:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.903 10:31:42 -- scripts/common.sh@354 -- # echo 1 00:04:11.903 10:31:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:11.903 10:31:42 -- scripts/common.sh@365 -- # decimal 2 00:04:11.903 10:31:42 -- scripts/common.sh@352 -- # local d=2 00:04:11.903 10:31:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.903 10:31:42 -- scripts/common.sh@354 -- # echo 2 00:04:11.903 10:31:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:11.903 10:31:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:11.903 10:31:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:11.903 10:31:42 -- scripts/common.sh@367 -- # return 0 00:04:11.903 10:31:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.903 10:31:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.903 --rc genhtml_branch_coverage=1 00:04:11.903 --rc genhtml_function_coverage=1 00:04:11.903 --rc genhtml_legend=1 00:04:11.903 --rc geninfo_all_blocks=1 00:04:11.903 --rc geninfo_unexecuted_blocks=1 00:04:11.903 00:04:11.903 ' 00:04:11.903 10:31:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.903 --rc genhtml_branch_coverage=1 00:04:11.903 --rc genhtml_function_coverage=1 00:04:11.903 --rc genhtml_legend=1 00:04:11.903 --rc geninfo_all_blocks=1 00:04:11.903 --rc geninfo_unexecuted_blocks=1 00:04:11.903 00:04:11.903 ' 00:04:11.903 10:31:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.903 --rc genhtml_branch_coverage=1 00:04:11.903 --rc genhtml_function_coverage=1 00:04:11.903 --rc genhtml_legend=1 00:04:11.903 --rc geninfo_all_blocks=1 00:04:11.903 --rc geninfo_unexecuted_blocks=1 00:04:11.903 00:04:11.903 ' 00:04:11.903 10:31:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.903 --rc genhtml_branch_coverage=1 00:04:11.903 --rc genhtml_function_coverage=1 00:04:11.903 --rc genhtml_legend=1 00:04:11.903 --rc geninfo_all_blocks=1 00:04:11.903 --rc geninfo_unexecuted_blocks=1 00:04:11.903 00:04:11.903 ' 00:04:11.903 10:31:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:11.903 10:31:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56790 00:04:11.903 10:31:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56790 00:04:11.903 10:31:42 -- common/autotest_common.sh@829 -- # '[' -z 56790 ']' 00:04:11.903 10:31:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.903 10:31:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.903 10:31:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.903 10:31:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
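The dpdk_mem_utility run that follows boils down to one RPC plus one post-processing script: dump the target's DPDK memory state to a file, then summarize it. A sketch with the paths from the trace (rpc.py defaults to the /var/tmp/spdk.sock target):

    scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0           # per-element map of heap id 0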
00:04:11.903 10:31:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.903 10:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:11.903 [2024-12-03 10:31:42.317928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:11.903 [2024-12-03 10:31:42.318017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56790 ] 00:04:11.903 [2024-12-03 10:31:42.458164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.163 [2024-12-03 10:31:42.599688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:12.163 [2024-12-03 10:31:42.599841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.735 10:31:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.735 10:31:43 -- common/autotest_common.sh@862 -- # return 0 00:04:12.735 10:31:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:12.735 10:31:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:12.735 10:31:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.735 10:31:43 -- common/autotest_common.sh@10 -- # set +x 00:04:12.735 { 00:04:12.735 "filename": "/tmp/spdk_mem_dump.txt" 00:04:12.735 } 00:04:12.735 10:31:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.735 10:31:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:12.735 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:12.735 1 heaps totaling size 820.000000 MiB 00:04:12.735 size: 820.000000 MiB heap id: 0 00:04:12.735 end heaps---------- 00:04:12.735 8 mempools totaling size 598.116089 MiB 00:04:12.735 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:12.735 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:12.735 size: 84.521057 MiB name: bdev_io_56790 00:04:12.735 size: 51.011292 MiB name: evtpool_56790 00:04:12.735 size: 50.003479 MiB name: msgpool_56790 00:04:12.735 size: 21.763794 MiB name: PDU_Pool 00:04:12.735 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:12.735 size: 0.026123 MiB name: Session_Pool 00:04:12.735 end mempools------- 00:04:12.735 6 memzones totaling size 4.142822 MiB 00:04:12.735 size: 1.000366 MiB name: RG_ring_0_56790 00:04:12.735 size: 1.000366 MiB name: RG_ring_1_56790 00:04:12.735 size: 1.000366 MiB name: RG_ring_4_56790 00:04:12.735 size: 1.000366 MiB name: RG_ring_5_56790 00:04:12.735 size: 0.125366 MiB name: RG_ring_2_56790 00:04:12.735 size: 0.015991 MiB name: RG_ring_3_56790 00:04:12.735 end memzones------- 00:04:12.735 10:31:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:12.735 heap id: 0 total size: 820.000000 MiB number of busy elements: 307 number of free elements: 18 00:04:12.735 list of free elements. 
size: 18.449829 MiB 00:04:12.735 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:12.735 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:12.735 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:12.735 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:12.735 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:12.735 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:12.735 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:12.735 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:12.735 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:12.735 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:12.735 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:12.735 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:12.735 element at address: 0x20001b000000 with size: 0.563416 MiB 00:04:12.735 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:12.735 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:12.735 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:12.735 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:12.735 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:12.735 list of standard malloc elements. size: 199.285767 MiB 00:04:12.735 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:12.735 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:12.735 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:12.735 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:12.735 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:12.735 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:12.735 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:12.735 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:12.735 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:12.735 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:12.735 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:12.735 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:12.735 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:12.735 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:12.735 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:12.735 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:12.736 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0925c0 with size: 0.000244 MiB 
00:04:12.736 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:12.736 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:12.737 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:12.737 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:12.737 element at 
address: 0x20002846af80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e080 
with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:12.737 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:12.737 list of memzone associated elements. 
size: 602.264404 MiB 00:04:12.737 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:12.737 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:12.737 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:12.737 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:12.737 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:12.737 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56790_0 00:04:12.737 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:12.737 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56790_0 00:04:12.737 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:12.737 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56790_0 00:04:12.737 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:12.737 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:12.737 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:12.737 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:12.737 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:12.737 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56790 00:04:12.737 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:12.737 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56790 00:04:12.737 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:12.737 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56790 00:04:12.737 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:12.737 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:12.737 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:12.737 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:12.737 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:12.737 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:12.737 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:12.737 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:12.737 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:12.737 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56790 00:04:12.737 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:12.737 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56790 00:04:12.737 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:12.737 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56790 00:04:12.737 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:12.737 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56790 00:04:12.737 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:12.737 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56790 00:04:12.737 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:12.737 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:12.737 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:12.737 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:12.737 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:12.737 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:12.737 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:12.737 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56790 00:04:12.737 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:12.737 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:12.737 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:12.737 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:12.737 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:12.737 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56790 00:04:12.737 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:12.737 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:12.737 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:12.737 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56790 00:04:12.737 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:12.737 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56790 00:04:12.737 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:12.737 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:12.737 10:31:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:12.737 10:31:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56790 00:04:12.737 10:31:43 -- common/autotest_common.sh@936 -- # '[' -z 56790 ']' 00:04:12.737 10:31:43 -- common/autotest_common.sh@940 -- # kill -0 56790 00:04:12.738 10:31:43 -- common/autotest_common.sh@941 -- # uname 00:04:12.738 10:31:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:12.738 10:31:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56790 00:04:12.738 killing process with pid 56790 00:04:12.738 10:31:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:12.738 10:31:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:12.738 10:31:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56790' 00:04:12.738 10:31:43 -- common/autotest_common.sh@955 -- # kill 56790 00:04:12.738 10:31:43 -- common/autotest_common.sh@960 -- # wait 56790 00:04:14.121 ************************************ 00:04:14.121 END TEST dpdk_mem_utility 00:04:14.121 ************************************ 00:04:14.121 00:04:14.121 real 0m2.344s 00:04:14.121 user 0m2.349s 00:04:14.121 sys 0m0.380s 00:04:14.121 10:31:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.121 10:31:44 -- common/autotest_common.sh@10 -- # set +x 00:04:14.121 10:31:44 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:14.121 10:31:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.121 10:31:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.121 10:31:44 -- common/autotest_common.sh@10 -- # set +x 00:04:14.121 ************************************ 00:04:14.121 START TEST event 00:04:14.121 ************************************ 00:04:14.121 10:31:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:14.121 * Looking for test storage... 
00:04:14.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:14.121 10:31:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:14.121 10:31:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:14.121 10:31:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:14.121 10:31:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:14.121 10:31:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:14.121 10:31:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:14.121 10:31:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:14.121 10:31:44 -- scripts/common.sh@335 -- # IFS=.-: 00:04:14.121 10:31:44 -- scripts/common.sh@335 -- # read -ra ver1 00:04:14.121 10:31:44 -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.121 10:31:44 -- scripts/common.sh@336 -- # read -ra ver2 00:04:14.121 10:31:44 -- scripts/common.sh@337 -- # local 'op=<' 00:04:14.121 10:31:44 -- scripts/common.sh@339 -- # ver1_l=2 00:04:14.121 10:31:44 -- scripts/common.sh@340 -- # ver2_l=1 00:04:14.121 10:31:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:14.121 10:31:44 -- scripts/common.sh@343 -- # case "$op" in 00:04:14.121 10:31:44 -- scripts/common.sh@344 -- # : 1 00:04:14.121 10:31:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:14.121 10:31:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.121 10:31:44 -- scripts/common.sh@364 -- # decimal 1 00:04:14.121 10:31:44 -- scripts/common.sh@352 -- # local d=1 00:04:14.121 10:31:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.121 10:31:44 -- scripts/common.sh@354 -- # echo 1 00:04:14.121 10:31:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:14.121 10:31:44 -- scripts/common.sh@365 -- # decimal 2 00:04:14.121 10:31:44 -- scripts/common.sh@352 -- # local d=2 00:04:14.121 10:31:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.121 10:31:44 -- scripts/common.sh@354 -- # echo 2 00:04:14.121 10:31:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:14.121 10:31:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:14.121 10:31:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:14.121 10:31:44 -- scripts/common.sh@367 -- # return 0 00:04:14.121 10:31:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.121 10:31:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:14.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.121 --rc genhtml_branch_coverage=1 00:04:14.121 --rc genhtml_function_coverage=1 00:04:14.121 --rc genhtml_legend=1 00:04:14.121 --rc geninfo_all_blocks=1 00:04:14.121 --rc geninfo_unexecuted_blocks=1 00:04:14.121 00:04:14.121 ' 00:04:14.121 10:31:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:14.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.121 --rc genhtml_branch_coverage=1 00:04:14.121 --rc genhtml_function_coverage=1 00:04:14.121 --rc genhtml_legend=1 00:04:14.121 --rc geninfo_all_blocks=1 00:04:14.121 --rc geninfo_unexecuted_blocks=1 00:04:14.121 00:04:14.121 ' 00:04:14.121 10:31:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:14.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.121 --rc genhtml_branch_coverage=1 00:04:14.121 --rc genhtml_function_coverage=1 00:04:14.121 --rc genhtml_legend=1 00:04:14.121 --rc geninfo_all_blocks=1 00:04:14.121 --rc geninfo_unexecuted_blocks=1 00:04:14.121 00:04:14.121 ' 00:04:14.121 10:31:44 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:14.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.121 --rc genhtml_branch_coverage=1 00:04:14.121 --rc genhtml_function_coverage=1 00:04:14.121 --rc genhtml_legend=1 00:04:14.121 --rc geninfo_all_blocks=1 00:04:14.121 --rc geninfo_unexecuted_blocks=1 00:04:14.121 00:04:14.121 ' 00:04:14.121 10:31:44 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:14.121 10:31:44 -- bdev/nbd_common.sh@6 -- # set -e 00:04:14.121 10:31:44 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:14.121 10:31:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:14.121 10:31:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.121 10:31:44 -- common/autotest_common.sh@10 -- # set +x 00:04:14.121 ************************************ 00:04:14.121 START TEST event_perf 00:04:14.121 ************************************ 00:04:14.121 10:31:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:14.121 Running I/O for 1 seconds...[2024-12-03 10:31:44.677237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:14.121 [2024-12-03 10:31:44.677338] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56875 ] 00:04:14.383 [2024-12-03 10:31:44.825849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:14.383 [2024-12-03 10:31:44.977018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.383 [2024-12-03 10:31:44.977272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:14.383 [2024-12-03 10:31:44.977564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.383 [2024-12-03 10:31:44.977581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.770 Running I/O for 1 seconds... 00:04:15.770 lcore 0: 213391 00:04:15.770 lcore 1: 213392 00:04:15.770 lcore 2: 213395 00:04:15.770 lcore 3: 213390 00:04:15.770 done. 00:04:15.770 00:04:15.770 ************************************ 00:04:15.770 END TEST event_perf 00:04:15.770 ************************************ 00:04:15.770 real 0m1.542s 00:04:15.770 user 0m4.336s 00:04:15.770 sys 0m0.090s 00:04:15.770 10:31:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.770 10:31:46 -- common/autotest_common.sh@10 -- # set +x 00:04:15.770 10:31:46 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:15.770 10:31:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:15.770 10:31:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.770 10:31:46 -- common/autotest_common.sh@10 -- # set +x 00:04:15.770 ************************************ 00:04:15.770 START TEST event_reactor 00:04:15.770 ************************************ 00:04:15.770 10:31:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:15.770 [2024-12-03 10:31:46.259177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
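The event_perf run above exercised all four reactors, and the near-equal per-lcore counts (roughly 213k each) are the signal that the event framework spread work evenly. As a hedged note on how the flags decode (standard SPDK app options, not restated in the log itself):

    # -m 0xF is a core mask: binary 1111 selects lcores 0-3, matching
    # "Total cores available: 4" and the four reactor start notices above.
    # -t 1 bounds the measurement at one second of I/O.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

    # A mask covering the first N cores can be computed as (1 << N) - 1:
    printf '0x%X\n' $(( (1 << 4) - 1 ))   # -> 0xF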
00:04:15.770 [2024-12-03 10:31:46.259429] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56920 ] 00:04:16.031 [2024-12-03 10:31:46.405952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.031 [2024-12-03 10:31:46.546075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.414 test_start 00:04:17.414 oneshot 00:04:17.414 tick 100 00:04:17.414 tick 100 00:04:17.414 tick 250 00:04:17.414 tick 100 00:04:17.414 tick 100 00:04:17.414 tick 250 00:04:17.414 tick 100 00:04:17.414 tick 500 00:04:17.414 tick 100 00:04:17.414 tick 100 00:04:17.414 tick 250 00:04:17.414 tick 100 00:04:17.414 tick 100 00:04:17.414 test_end 00:04:17.414 00:04:17.414 real 0m1.509s 00:04:17.414 user 0m1.336s 00:04:17.414 sys 0m0.065s 00:04:17.414 ************************************ 00:04:17.414 END TEST event_reactor 00:04:17.414 ************************************ 00:04:17.414 10:31:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.414 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:17.414 10:31:47 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.414 10:31:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:17.414 10:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.414 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:17.414 ************************************ 00:04:17.414 START TEST event_reactor_perf 00:04:17.414 ************************************ 00:04:17.414 10:31:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.414 [2024-12-03 10:31:47.806859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:17.414 [2024-12-03 10:31:47.807161] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56962 ] 00:04:17.414 [2024-12-03 10:31:47.955671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.674 [2024-12-03 10:31:48.124380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.051 test_start 00:04:19.051 test_end 00:04:19.051 Performance: 315774 events per second 00:04:19.051 00:04:19.051 real 0m1.606s 00:04:19.051 user 0m1.422s 00:04:19.051 sys 0m0.076s 00:04:19.051 10:31:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.051 10:31:49 -- common/autotest_common.sh@10 -- # set +x 00:04:19.051 ************************************ 00:04:19.051 END TEST event_reactor_perf 00:04:19.051 ************************************ 00:04:19.051 10:31:49 -- event/event.sh@49 -- # uname -s 00:04:19.051 10:31:49 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:19.051 10:31:49 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:19.051 10:31:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.051 10:31:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.051 10:31:49 -- common/autotest_common.sh@10 -- # set +x 00:04:19.051 ************************************ 00:04:19.051 START TEST event_scheduler 00:04:19.051 ************************************ 00:04:19.051 10:31:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:19.051 * Looking for test storage... 00:04:19.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:19.051 10:31:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:19.051 10:31:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:19.051 10:31:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:19.051 10:31:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:19.051 10:31:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:19.051 10:31:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:19.051 10:31:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:19.051 10:31:49 -- scripts/common.sh@335 -- # IFS=.-: 00:04:19.051 10:31:49 -- scripts/common.sh@335 -- # read -ra ver1 00:04:19.051 10:31:49 -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.051 10:31:49 -- scripts/common.sh@336 -- # read -ra ver2 00:04:19.051 10:31:49 -- scripts/common.sh@337 -- # local 'op=<' 00:04:19.051 10:31:49 -- scripts/common.sh@339 -- # ver1_l=2 00:04:19.051 10:31:49 -- scripts/common.sh@340 -- # ver2_l=1 00:04:19.051 10:31:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:19.051 10:31:49 -- scripts/common.sh@343 -- # case "$op" in 00:04:19.051 10:31:49 -- scripts/common.sh@344 -- # : 1 00:04:19.051 10:31:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:19.051 10:31:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.051 10:31:49 -- scripts/common.sh@364 -- # decimal 1 00:04:19.051 10:31:49 -- scripts/common.sh@352 -- # local d=1 00:04:19.051 10:31:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.051 10:31:49 -- scripts/common.sh@354 -- # echo 1 00:04:19.051 10:31:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:19.051 10:31:49 -- scripts/common.sh@365 -- # decimal 2 00:04:19.051 10:31:49 -- scripts/common.sh@352 -- # local d=2 00:04:19.051 10:31:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.051 10:31:49 -- scripts/common.sh@354 -- # echo 2 00:04:19.051 10:31:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:19.051 10:31:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:19.051 10:31:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:19.051 10:31:49 -- scripts/common.sh@367 -- # return 0 00:04:19.051 10:31:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.051 10:31:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:19.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.051 --rc genhtml_branch_coverage=1 00:04:19.051 --rc genhtml_function_coverage=1 00:04:19.051 --rc genhtml_legend=1 00:04:19.051 --rc geninfo_all_blocks=1 00:04:19.051 --rc geninfo_unexecuted_blocks=1 00:04:19.051 00:04:19.051 ' 00:04:19.051 10:31:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:19.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.051 --rc genhtml_branch_coverage=1 00:04:19.051 --rc genhtml_function_coverage=1 00:04:19.051 --rc genhtml_legend=1 00:04:19.051 --rc geninfo_all_blocks=1 00:04:19.051 --rc geninfo_unexecuted_blocks=1 00:04:19.051 00:04:19.051 ' 00:04:19.051 10:31:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:19.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.052 --rc genhtml_branch_coverage=1 00:04:19.052 --rc genhtml_function_coverage=1 00:04:19.052 --rc genhtml_legend=1 00:04:19.052 --rc geninfo_all_blocks=1 00:04:19.052 --rc geninfo_unexecuted_blocks=1 00:04:19.052 00:04:19.052 ' 00:04:19.052 10:31:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:19.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.052 --rc genhtml_branch_coverage=1 00:04:19.052 --rc genhtml_function_coverage=1 00:04:19.052 --rc genhtml_legend=1 00:04:19.052 --rc geninfo_all_blocks=1 00:04:19.052 --rc geninfo_unexecuted_blocks=1 00:04:19.052 00:04:19.052 ' 00:04:19.052 10:31:49 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:19.052 10:31:49 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57026 00:04:19.052 10:31:49 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.052 10:31:49 -- scheduler/scheduler.sh@37 -- # waitforlisten 57026 00:04:19.052 10:31:49 -- common/autotest_common.sh@829 -- # '[' -z 57026 ']' 00:04:19.052 10:31:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.052 10:31:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.052 10:31:49 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:19.052 10:31:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
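Before the power-management probing in the next entries, it is worth pinning down what this scheduler app invocation asks for. A condensed sketch of the startup handshake the test performs, with flag meanings as hedged annotations (they are consistent with the EAL parameters echoed below); the wait loop again simplifies waitforlisten:

    SCHED=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # -m 0xF: reactors on lcores 0-3; -p 0x2: lcore 2 is the main core
    # (hence --main-lcore=2 in the EAL parameter line below);
    # --wait-for-rpc: pause before framework init so a scheduler can be
    # chosen first. The -f flag is carried over from the test unmodified.
    "$SCHED" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!

    until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # With the framework paused, select the dynamic scheduler, then resume
    # initialization -- the same two RPCs the test issues at @39 and @40.
    "$RPC" framework_set_scheduler dynamic
    "$RPC" framework_start_init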
00:04:19.052 10:31:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.052 10:31:49 -- common/autotest_common.sh@10 -- # set +x 00:04:19.052 [2024-12-03 10:31:49.624604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:19.052 [2024-12-03 10:31:49.624897] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57026 ] 00:04:19.309 [2024-12-03 10:31:49.775780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:19.566 [2024-12-03 10:31:49.961110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.566 [2024-12-03 10:31:49.961259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.566 [2024-12-03 10:31:49.961554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:19.566 [2024-12-03 10:31:49.961581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:19.826 10:31:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.826 10:31:50 -- common/autotest_common.sh@862 -- # return 0 00:04:19.826 10:31:50 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:19.826 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.826 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:19.826 POWER: Env isn't set yet! 00:04:19.826 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:19.826 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:19.826 POWER: Cannot set governor of lcore 0 to userspace 00:04:19.826 POWER: Attempting to initialise PSTAT power management... 00:04:19.826 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:19.826 POWER: Cannot set governor of lcore 0 to performance 00:04:19.826 POWER: Attempting to initialise AMD PSTATE power management... 00:04:19.826 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:19.826 POWER: Cannot set governor of lcore 0 to userspace 00:04:19.826 POWER: Attempting to initialise CPPC power management... 00:04:19.826 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:19.826 POWER: Cannot set governor of lcore 0 to userspace 00:04:19.826 POWER: Attempting to initialise VM power management... 
00:04:19.826 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:19.826 POWER: Unable to set Power Management Environment for lcore 0 00:04:19.826 [2024-12-03 10:31:50.419360] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:19.826 [2024-12-03 10:31:50.419389] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:19.826 [2024-12-03 10:31:50.419446] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:19.826 [2024-12-03 10:31:50.419478] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:19.826 [2024-12-03 10:31:50.419500] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:19.826 [2024-12-03 10:31:50.419519] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:19.826 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.826 10:31:50 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:19.826 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.826 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.083 [2024-12-03 10:31:50.641334] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:20.083 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.083 10:31:50 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:20.083 10:31:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.084 10:31:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.084 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.084 ************************************ 00:04:20.084 START TEST scheduler_create_thread 00:04:20.084 ************************************ 00:04:20.084 10:31:50 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:20.084 10:31:50 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:20.084 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.084 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.084 2 00:04:20.084 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.084 10:31:50 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:20.084 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.084 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.084 3 00:04:20.084 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.084 10:31:50 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:20.084 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.084 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.084 4 00:04:20.084 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.084 10:31:50 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:20.084 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.084 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 5 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 6 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 7 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 8 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 9 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 10 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 10:31:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.343 10:31:50 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:20.343 10:31:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.343 10:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:21.718 10:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.718 10:31:52 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:21.718 10:31:52 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:21.718 10:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.718 10:31:52 -- common/autotest_common.sh@10 -- # set +x 00:04:23.093 ************************************ 00:04:23.093 END TEST scheduler_create_thread 00:04:23.093 ************************************ 00:04:23.093 10:31:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.093 00:04:23.093 real 0m2.614s 00:04:23.093 user 0m0.015s 00:04:23.093 sys 0m0.004s 00:04:23.093 10:31:53 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.093 10:31:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.093 10:31:53 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:23.093 10:31:53 -- scheduler/scheduler.sh@46 -- # killprocess 57026 00:04:23.093 10:31:53 -- common/autotest_common.sh@936 -- # '[' -z 57026 ']' 00:04:23.093 10:31:53 -- common/autotest_common.sh@940 -- # kill -0 57026 00:04:23.093 10:31:53 -- common/autotest_common.sh@941 -- # uname 00:04:23.093 10:31:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:23.093 10:31:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57026 00:04:23.093 killing process with pid 57026 00:04:23.093 10:31:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:23.093 10:31:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:23.093 10:31:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57026' 00:04:23.093 10:31:53 -- common/autotest_common.sh@955 -- # kill 57026 00:04:23.093 10:31:53 -- common/autotest_common.sh@960 -- # wait 57026 00:04:23.351 [2024-12-03 10:31:53.751788] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:23.918 ************************************ 00:04:23.918 END TEST event_scheduler 00:04:23.918 ************************************ 00:04:23.918 00:04:23.918 real 0m4.947s 00:04:23.918 user 0m8.277s 00:04:23.918 sys 0m0.325s 00:04:23.918 10:31:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.918 10:31:54 -- common/autotest_common.sh@10 -- # set +x 00:04:23.918 10:31:54 -- event/event.sh@51 -- # modprobe -n nbd 00:04:23.918 10:31:54 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:23.918 10:31:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.918 10:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.918 10:31:54 -- common/autotest_common.sh@10 -- # set +x 00:04:23.918 ************************************ 00:04:23.918 START TEST app_repeat 00:04:23.918 ************************************ 00:04:23.918 10:31:54 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:23.918 10:31:54 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.918 10:31:54 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.918 10:31:54 -- event/event.sh@13 -- # local nbd_list 00:04:23.918 10:31:54 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.918 10:31:54 -- event/event.sh@14 -- # local bdev_list 00:04:23.918 10:31:54 -- event/event.sh@15 -- # local repeat_times=4 00:04:23.918 10:31:54 -- event/event.sh@17 -- # modprobe nbd 00:04:23.918 Process app_repeat pid: 57132 00:04:23.918 spdk_app_start Round 0 00:04:23.918 10:31:54 -- event/event.sh@19 -- # repeat_pid=57132 00:04:23.918 10:31:54 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.918 10:31:54 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57132' 00:04:23.918 10:31:54 -- event/event.sh@23 -- # for i in {0..2} 00:04:23.918 10:31:54 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:23.918 10:31:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:23.918 10:31:54 -- event/event.sh@25 -- # waitforlisten 57132 /var/tmp/spdk-nbd.sock 00:04:23.918 10:31:54 -- common/autotest_common.sh@829 -- # '[' -z 57132 ']' 00:04:23.918 10:31:54 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:23.918 10:31:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.918 10:31:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.918 10:31:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.918 10:31:54 -- common/autotest_common.sh@10 -- # set +x 00:04:23.918 [2024-12-03 10:31:54.466357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:23.918 [2024-12-03 10:31:54.466476] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57132 ] 00:04:24.179 [2024-12-03 10:31:54.613403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.439 [2024-12-03 10:31:54.796330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.439 [2024-12-03 10:31:54.796457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.699 10:31:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.699 10:31:55 -- common/autotest_common.sh@862 -- # return 0 00:04:24.699 10:31:55 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.957 Malloc0 00:04:24.957 10:31:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.216 Malloc1 00:04:25.216 10:31:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@12 -- # local i 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.216 10:31:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.477 /dev/nbd0 00:04:25.477 10:31:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.477 10:31:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.477 10:31:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:25.477 10:31:55 -- common/autotest_common.sh@867 -- # local i 00:04:25.477 10:31:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.477 10:31:55 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.477 10:31:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:25.477 10:31:55 -- common/autotest_common.sh@871 -- # break 00:04:25.477 10:31:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.477 10:31:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.477 10:31:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.477 1+0 records in 00:04:25.477 1+0 records out 00:04:25.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213004 s, 19.2 MB/s 00:04:25.477 10:31:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.477 10:31:55 -- common/autotest_common.sh@884 -- # size=4096 00:04:25.477 10:31:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.477 10:31:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.477 10:31:55 -- common/autotest_common.sh@887 -- # return 0 00:04:25.477 10:31:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.477 10:31:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.477 10:31:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.739 /dev/nbd1 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.739 10:31:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:25.739 10:31:56 -- common/autotest_common.sh@867 -- # local i 00:04:25.739 10:31:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.739 10:31:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.739 10:31:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:25.739 10:31:56 -- common/autotest_common.sh@871 -- # break 00:04:25.739 10:31:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.739 10:31:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.739 10:31:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.739 1+0 records in 00:04:25.739 1+0 records out 00:04:25.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164968 s, 24.8 MB/s 00:04:25.739 10:31:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.739 10:31:56 -- common/autotest_common.sh@884 -- # size=4096 00:04:25.739 10:31:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.739 10:31:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.739 10:31:56 -- common/autotest_common.sh@887 -- # return 0 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.739 { 00:04:25.739 "nbd_device": "/dev/nbd0", 00:04:25.739 "bdev_name": "Malloc0" 00:04:25.739 }, 00:04:25.739 { 00:04:25.739 "nbd_device": "/dev/nbd1", 
00:04:25.739 "bdev_name": "Malloc1" 00:04:25.739 } 00:04:25.739 ]' 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.739 10:31:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.739 { 00:04:25.739 "nbd_device": "/dev/nbd0", 00:04:25.739 "bdev_name": "Malloc0" 00:04:25.739 }, 00:04:25.739 { 00:04:25.739 "nbd_device": "/dev/nbd1", 00:04:25.739 "bdev_name": "Malloc1" 00:04:25.739 } 00:04:25.739 ]' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.002 /dev/nbd1' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.002 /dev/nbd1' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.002 256+0 records in 00:04:26.002 256+0 records out 00:04:26.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00904684 s, 116 MB/s 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.002 256+0 records in 00:04:26.002 256+0 records out 00:04:26.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111771 s, 93.8 MB/s 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.002 256+0 records in 00:04:26.002 256+0 records out 00:04:26.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206643 s, 50.7 MB/s 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@51 -- # local i 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.002 10:31:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@41 -- # break 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@41 -- # break 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.265 10:31:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@65 -- # true 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.527 10:31:57 -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.527 10:31:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.788 10:31:57 -- event/event.sh@35 -- # sleep 3 00:04:27.732 [2024-12-03 10:31:57.980718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.732 [2024-12-03 10:31:58.111505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.732 [2024-12-03 
10:31:58.111722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.732 [2024-12-03 10:31:58.215618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.732 [2024-12-03 10:31:58.215664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.281 spdk_app_start Round 1 00:04:30.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:30.281 10:32:00 -- event/event.sh@23 -- # for i in {0..2} 00:04:30.281 10:32:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:30.281 10:32:00 -- event/event.sh@25 -- # waitforlisten 57132 /var/tmp/spdk-nbd.sock 00:04:30.281 10:32:00 -- common/autotest_common.sh@829 -- # '[' -z 57132 ']' 00:04:30.281 10:32:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.281 10:32:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.281 10:32:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.281 10:32:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.281 10:32:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.281 10:32:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.281 10:32:00 -- common/autotest_common.sh@862 -- # return 0 00:04:30.281 10:32:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.281 Malloc0 00:04:30.281 10:32:00 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.542 Malloc1 00:04:30.542 10:32:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@12 -- # local i 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.542 10:32:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.803 /dev/nbd0 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.803 10:32:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:30.803 10:32:01 -- common/autotest_common.sh@867 -- # local i 00:04:30.803 10:32:01 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:30.803 10:32:01 -- common/autotest_common.sh@871 -- # break 00:04:30.803 10:32:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.803 1+0 records in 00:04:30.803 1+0 records out 00:04:30.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153882 s, 26.6 MB/s 00:04:30.803 10:32:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.803 10:32:01 -- common/autotest_common.sh@884 -- # size=4096 00:04:30.803 10:32:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.803 10:32:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:30.803 10:32:01 -- common/autotest_common.sh@887 -- # return 0 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.803 /dev/nbd1 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.803 10:32:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:30.803 10:32:01 -- common/autotest_common.sh@867 -- # local i 00:04:30.803 10:32:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:30.803 10:32:01 -- common/autotest_common.sh@871 -- # break 00:04:30.803 10:32:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:30.803 10:32:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.803 1+0 records in 00:04:30.803 1+0 records out 00:04:30.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256317 s, 16.0 MB/s 00:04:30.803 10:32:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.803 10:32:01 -- common/autotest_common.sh@884 -- # size=4096 00:04:30.803 10:32:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.803 10:32:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:30.803 10:32:01 -- common/autotest_common.sh@887 -- # return 0 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.803 10:32:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.064 { 00:04:31.064 "nbd_device": "/dev/nbd0", 00:04:31.064 "bdev_name": "Malloc0" 00:04:31.064 }, 00:04:31.064 { 00:04:31.064 
"nbd_device": "/dev/nbd1", 00:04:31.064 "bdev_name": "Malloc1" 00:04:31.064 } 00:04:31.064 ]' 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.064 { 00:04:31.064 "nbd_device": "/dev/nbd0", 00:04:31.064 "bdev_name": "Malloc0" 00:04:31.064 }, 00:04:31.064 { 00:04:31.064 "nbd_device": "/dev/nbd1", 00:04:31.064 "bdev_name": "Malloc1" 00:04:31.064 } 00:04:31.064 ]' 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.064 /dev/nbd1' 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.064 /dev/nbd1' 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.064 10:32:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.065 256+0 records in 00:04:31.065 256+0 records out 00:04:31.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467506 s, 224 MB/s 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.065 256+0 records in 00:04:31.065 256+0 records out 00:04:31.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014932 s, 70.2 MB/s 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.065 10:32:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.325 256+0 records in 00:04:31.325 256+0 records out 00:04:31.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149768 s, 70.0 MB/s 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.325 10:32:01 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@51 -- # local i 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@41 -- # break 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.325 10:32:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@41 -- # break 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.586 10:32:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.847 10:32:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.847 10:32:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.847 10:32:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.847 10:32:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.847 10:32:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.848 10:32:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.848 10:32:02 -- bdev/nbd_common.sh@65 -- # true 00:04:31.848 10:32:02 -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.848 10:32:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.848 10:32:02 -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.848 10:32:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.848 10:32:02 -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.848 10:32:02 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.108 10:32:02 -- event/event.sh@35 -- # sleep 3 00:04:32.680 [2024-12-03 10:32:03.193374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.941 [2024-12-03 10:32:03.335368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
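[annotation] A minimal sketch of the write/verify cycle each round above runs against the exported nbd devices, paraphrasing the nbd_dd_data_verify helper visible in the trace. The paths, block size, and 1 MiB compare length are copied from the traced commands; the loop framing below is an assumption, not the literal nbd_common.sh body.

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # Fill the reference file with 1 MiB of random data (256 x 4 KiB blocks).
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    for nbd in "${nbd_list[@]}"; do
        # Push the reference data through each export with O_DIRECT writes,
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        # then read it back and byte-compare the first 1 MiB.
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"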
00:04:32.941 [2024-12-03 10:32:03.335385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.941 [2024-12-03 10:32:03.439678] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.941 [2024-12-03 10:32:03.439725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.485 spdk_app_start Round 2 00:04:35.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:35.485 10:32:05 -- event/event.sh@23 -- # for i in {0..2} 00:04:35.485 10:32:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:35.485 10:32:05 -- event/event.sh@25 -- # waitforlisten 57132 /var/tmp/spdk-nbd.sock 00:04:35.485 10:32:05 -- common/autotest_common.sh@829 -- # '[' -z 57132 ']' 00:04:35.485 10:32:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.485 10:32:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.485 10:32:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.485 10:32:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.485 10:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:35.485 10:32:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.485 10:32:05 -- common/autotest_common.sh@862 -- # return 0 00:04:35.485 10:32:05 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.485 Malloc0 00:04:35.485 10:32:05 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.745 Malloc1 00:04:35.745 10:32:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.745 10:32:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.745 10:32:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.745 10:32:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.745 10:32:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.745 10:32:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@12 -- # local i 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.746 10:32:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:36.007 /dev/nbd0 00:04:36.007 10:32:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.007 10:32:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.007 10:32:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:36.007 10:32:06 -- common/autotest_common.sh@867 -- # local i 00:04:36.007 10:32:06 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:36.007 10:32:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:36.007 10:32:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:36.007 10:32:06 -- common/autotest_common.sh@871 -- # break 00:04:36.007 10:32:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:36.007 10:32:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:36.007 10:32:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.007 1+0 records in 00:04:36.007 1+0 records out 00:04:36.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227654 s, 18.0 MB/s 00:04:36.007 10:32:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.007 10:32:06 -- common/autotest_common.sh@884 -- # size=4096 00:04:36.007 10:32:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.007 10:32:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:36.007 10:32:06 -- common/autotest_common.sh@887 -- # return 0 00:04:36.007 10:32:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.007 10:32:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.007 10:32:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.267 /dev/nbd1 00:04:36.267 10:32:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.267 10:32:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.267 10:32:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:36.267 10:32:06 -- common/autotest_common.sh@867 -- # local i 00:04:36.267 10:32:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:36.267 10:32:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:36.267 10:32:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:36.267 10:32:06 -- common/autotest_common.sh@871 -- # break 00:04:36.267 10:32:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:36.267 10:32:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:36.268 10:32:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.268 1+0 records in 00:04:36.268 1+0 records out 00:04:36.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341518 s, 12.0 MB/s 00:04:36.268 10:32:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.268 10:32:06 -- common/autotest_common.sh@884 -- # size=4096 00:04:36.268 10:32:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.268 10:32:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:36.268 10:32:06 -- common/autotest_common.sh@887 -- # return 0 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.268 { 00:04:36.268 "nbd_device": "/dev/nbd0", 00:04:36.268 "bdev_name": "Malloc0" 
00:04:36.268 }, 00:04:36.268 { 00:04:36.268 "nbd_device": "/dev/nbd1", 00:04:36.268 "bdev_name": "Malloc1" 00:04:36.268 } 00:04:36.268 ]' 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.268 { 00:04:36.268 "nbd_device": "/dev/nbd0", 00:04:36.268 "bdev_name": "Malloc0" 00:04:36.268 }, 00:04:36.268 { 00:04:36.268 "nbd_device": "/dev/nbd1", 00:04:36.268 "bdev_name": "Malloc1" 00:04:36.268 } 00:04:36.268 ]' 00:04:36.268 10:32:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.528 10:32:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.528 /dev/nbd1' 00:04:36.528 10:32:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.528 /dev/nbd1' 00:04:36.528 10:32:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.528 10:32:06 -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.529 256+0 records in 00:04:36.529 256+0 records out 00:04:36.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672974 s, 156 MB/s 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.529 256+0 records in 00:04:36.529 256+0 records out 00:04:36.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016916 s, 62.0 MB/s 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.529 256+0 records in 00:04:36.529 256+0 records out 00:04:36.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173275 s, 60.5 MB/s 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@51 -- # local i 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.529 10:32:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@41 -- # break 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.790 10:32:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@41 -- # break 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@65 -- # true 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.051 10:32:07 -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.051 10:32:07 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.311 10:32:07 -- event/event.sh@35 -- # sleep 3 00:04:38.252 [2024-12-03 10:32:08.551778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.252 [2024-12-03 10:32:08.689664] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:04:38.252 [2024-12-03 10:32:08.689670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.252 [2024-12-03 10:32:08.796040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.252 [2024-12-03 10:32:08.796100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:40.791 10:32:10 -- event/event.sh@38 -- # waitforlisten 57132 /var/tmp/spdk-nbd.sock 00:04:40.791 10:32:10 -- common/autotest_common.sh@829 -- # '[' -z 57132 ']' 00:04:40.791 10:32:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.791 10:32:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.791 10:32:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.791 10:32:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.791 10:32:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.791 10:32:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.791 10:32:11 -- common/autotest_common.sh@862 -- # return 0 00:04:40.791 10:32:11 -- event/event.sh@39 -- # killprocess 57132 00:04:40.791 10:32:11 -- common/autotest_common.sh@936 -- # '[' -z 57132 ']' 00:04:40.791 10:32:11 -- common/autotest_common.sh@940 -- # kill -0 57132 00:04:40.791 10:32:11 -- common/autotest_common.sh@941 -- # uname 00:04:40.791 10:32:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:40.791 10:32:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57132 00:04:40.791 killing process with pid 57132 00:04:40.791 10:32:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:40.791 10:32:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:40.791 10:32:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57132' 00:04:40.791 10:32:11 -- common/autotest_common.sh@955 -- # kill 57132 00:04:40.791 10:32:11 -- common/autotest_common.sh@960 -- # wait 57132 00:04:41.359 spdk_app_start is called in Round 0. 00:04:41.359 Shutdown signal received, stop current app iteration 00:04:41.359 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:41.359 spdk_app_start is called in Round 1. 00:04:41.359 Shutdown signal received, stop current app iteration 00:04:41.359 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:41.359 spdk_app_start is called in Round 2. 00:04:41.359 Shutdown signal received, stop current app iteration 00:04:41.359 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:41.359 spdk_app_start is called in Round 3. 
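[annotation] How the rounds above are driven, in outline: event.sh loops three times, and app_repeat (launched with -t 4) re-enters spdk_app_start after each SIGTERM, which is why four "Round N" banners appear for a single pid. A hedged sketch of that loop follows; the wrapper shape is an assumption, while the RPC and socket path are the ones shown in the trace.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten 57132 /var/tmp/spdk-nbd.sock      # wait for the RPC socket
        # ... bdev_malloc_create, nbd_start_disk, dd+cmp verify, nbd_stop_disk ...
        # SIGTERM ends the current iteration; -t 4 lets the app come back up.
        "$rpc_py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                         # let the app reinitialize
    done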
00:04:41.359 Shutdown signal received, stop current app iteration 00:04:41.359 ************************************ 00:04:41.359 END TEST app_repeat 00:04:41.359 ************************************ 00:04:41.359 10:32:11 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:41.359 10:32:11 -- event/event.sh@42 -- # return 0 00:04:41.359 00:04:41.359 real 0m17.317s 00:04:41.359 user 0m37.137s 00:04:41.359 sys 0m1.963s 00:04:41.359 10:32:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.359 10:32:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.359 10:32:11 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:41.359 10:32:11 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:41.359 10:32:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.359 10:32:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.359 10:32:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.359 ************************************ 00:04:41.359 START TEST cpu_locks 00:04:41.359 ************************************ 00:04:41.359 10:32:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:41.359 * Looking for test storage... 00:04:41.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:41.359 10:32:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.359 10:32:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.359 10:32:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.359 10:32:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.359 10:32:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.359 10:32:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.359 10:32:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.360 10:32:11 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.360 10:32:11 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.360 10:32:11 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.360 10:32:11 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.360 10:32:11 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.360 10:32:11 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.360 10:32:11 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.360 10:32:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.360 10:32:11 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.360 10:32:11 -- scripts/common.sh@344 -- # : 1 00:04:41.360 10:32:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.360 10:32:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.360 10:32:11 -- scripts/common.sh@364 -- # decimal 1 00:04:41.360 10:32:11 -- scripts/common.sh@352 -- # local d=1 00:04:41.360 10:32:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.360 10:32:11 -- scripts/common.sh@354 -- # echo 1 00:04:41.360 10:32:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.360 10:32:11 -- scripts/common.sh@365 -- # decimal 2 00:04:41.360 10:32:11 -- scripts/common.sh@352 -- # local d=2 00:04:41.360 10:32:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.360 10:32:11 -- scripts/common.sh@354 -- # echo 2 00:04:41.360 10:32:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.360 10:32:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.360 10:32:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.360 10:32:11 -- scripts/common.sh@367 -- # return 0 00:04:41.360 10:32:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.360 10:32:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.360 --rc genhtml_branch_coverage=1 00:04:41.360 --rc genhtml_function_coverage=1 00:04:41.360 --rc genhtml_legend=1 00:04:41.360 --rc geninfo_all_blocks=1 00:04:41.360 --rc geninfo_unexecuted_blocks=1 00:04:41.360 00:04:41.360 ' 00:04:41.360 10:32:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.360 --rc genhtml_branch_coverage=1 00:04:41.360 --rc genhtml_function_coverage=1 00:04:41.360 --rc genhtml_legend=1 00:04:41.360 --rc geninfo_all_blocks=1 00:04:41.360 --rc geninfo_unexecuted_blocks=1 00:04:41.360 00:04:41.360 ' 00:04:41.360 10:32:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.360 --rc genhtml_branch_coverage=1 00:04:41.360 --rc genhtml_function_coverage=1 00:04:41.360 --rc genhtml_legend=1 00:04:41.360 --rc geninfo_all_blocks=1 00:04:41.360 --rc geninfo_unexecuted_blocks=1 00:04:41.360 00:04:41.360 ' 00:04:41.360 10:32:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.360 --rc genhtml_branch_coverage=1 00:04:41.360 --rc genhtml_function_coverage=1 00:04:41.360 --rc genhtml_legend=1 00:04:41.360 --rc geninfo_all_blocks=1 00:04:41.360 --rc geninfo_unexecuted_blocks=1 00:04:41.360 00:04:41.360 ' 00:04:41.360 10:32:11 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:41.360 10:32:11 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:41.360 10:32:11 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:41.360 10:32:11 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:41.360 10:32:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.360 10:32:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.360 10:32:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.360 ************************************ 00:04:41.360 START TEST default_locks 00:04:41.360 ************************************ 00:04:41.360 10:32:11 -- common/autotest_common.sh@1114 -- # default_locks 00:04:41.360 10:32:11 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57556 00:04:41.360 10:32:11 -- event/cpu_locks.sh@47 -- # waitforlisten 57556 00:04:41.360 10:32:11 -- common/autotest_common.sh@829 -- # '[' -z 57556 ']' 00:04:41.360 10:32:11 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.360 10:32:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.360 10:32:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.360 10:32:11 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.360 10:32:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.360 10:32:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.620 [2024-12-03 10:32:11.996524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:41.620 [2024-12-03 10:32:11.996745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57556 ] 00:04:41.620 [2024-12-03 10:32:12.145234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.880 [2024-12-03 10:32:12.285527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.880 [2024-12-03 10:32:12.285801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.446 10:32:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.446 10:32:12 -- common/autotest_common.sh@862 -- # return 0 00:04:42.446 10:32:12 -- event/cpu_locks.sh@49 -- # locks_exist 57556 00:04:42.446 10:32:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.446 10:32:12 -- event/cpu_locks.sh@22 -- # lslocks -p 57556 00:04:42.707 10:32:13 -- event/cpu_locks.sh@50 -- # killprocess 57556 00:04:42.707 10:32:13 -- common/autotest_common.sh@936 -- # '[' -z 57556 ']' 00:04:42.707 10:32:13 -- common/autotest_common.sh@940 -- # kill -0 57556 00:04:42.707 10:32:13 -- common/autotest_common.sh@941 -- # uname 00:04:42.707 10:32:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:42.707 10:32:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57556 00:04:42.707 killing process with pid 57556 00:04:42.707 10:32:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:42.707 10:32:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:42.707 10:32:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57556' 00:04:42.707 10:32:13 -- common/autotest_common.sh@955 -- # kill 57556 00:04:42.707 10:32:13 -- common/autotest_common.sh@960 -- # wait 57556 00:04:44.092 10:32:14 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57556 00:04:44.092 10:32:14 -- common/autotest_common.sh@650 -- # local es=0 00:04:44.092 10:32:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57556 00:04:44.092 10:32:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:44.092 10:32:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.092 10:32:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:44.092 10:32:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.092 10:32:14 -- common/autotest_common.sh@653 -- # waitforlisten 57556 00:04:44.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
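[annotation] The ERROR line that follows is expected: this waitforlisten runs under the NOT wrapper from autotest_common.sh, which inverts the exit status so a required failure counts as a pass (that is what the es=1 bookkeeping in the trace records). A simplified sketch of the idea, not the literal helper:

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # it failed, which is what the test wanted
    }
    NOT waitforlisten 57556    # passes only because pid 57556 is gone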
00:04:44.092 ERROR: process (pid: 57556) is no longer running 00:04:44.092 10:32:14 -- common/autotest_common.sh@829 -- # '[' -z 57556 ']' 00:04:44.092 10:32:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.092 10:32:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.092 10:32:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.092 10:32:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.092 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.092 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57556) - No such process 00:04:44.092 10:32:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.092 10:32:14 -- common/autotest_common.sh@862 -- # return 1 00:04:44.092 10:32:14 -- common/autotest_common.sh@653 -- # es=1 00:04:44.092 10:32:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.092 10:32:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:44.092 10:32:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.092 10:32:14 -- event/cpu_locks.sh@54 -- # no_locks 00:04:44.092 10:32:14 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.092 10:32:14 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.092 10:32:14 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.092 00:04:44.092 real 0m2.372s 00:04:44.092 user 0m2.343s 00:04:44.092 sys 0m0.474s 00:04:44.092 10:32:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.092 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.092 ************************************ 00:04:44.092 END TEST default_locks 00:04:44.092 ************************************ 00:04:44.092 10:32:14 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:44.092 10:32:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.092 10:32:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.092 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.092 ************************************ 00:04:44.092 START TEST default_locks_via_rpc 00:04:44.092 ************************************ 00:04:44.092 10:32:14 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:04:44.092 10:32:14 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57615 00:04:44.092 10:32:14 -- event/cpu_locks.sh@63 -- # waitforlisten 57615 00:04:44.092 10:32:14 -- common/autotest_common.sh@829 -- # '[' -z 57615 ']' 00:04:44.092 10:32:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.092 10:32:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.092 10:32:14 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.092 10:32:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.092 10:32:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.092 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.092 [2024-12-03 10:32:14.413224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
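[annotation] The lock check both of these tests rely on (cpu_locks.sh locks_exist) is visible in the trace: spdk_tgt takes a POSIX file lock per claimed core, so holding a core shows up in lslocks. A sketch reconstructed from the traced commands:

    locks_exist() {
        local pid=$1
        # lslocks lists file locks held by the pid; the per-core lock files
        # are named spdk_cpu_lock*, so a match means the core is claimed.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 57615 && echo "pid 57615 holds its core lock"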
00:04:44.092 [2024-12-03 10:32:14.413504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57615 ] 00:04:44.092 [2024-12-03 10:32:14.561659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.351 [2024-12-03 10:32:14.710697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.351 [2024-12-03 10:32:14.710850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.609 10:32:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.609 10:32:15 -- common/autotest_common.sh@862 -- # return 0 00:04:44.609 10:32:15 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:44.609 10:32:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.609 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:04:44.609 10:32:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.609 10:32:15 -- event/cpu_locks.sh@67 -- # no_locks 00:04:44.609 10:32:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.609 10:32:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.609 10:32:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.609 10:32:15 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.609 10:32:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.609 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:04:44.609 10:32:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.609 10:32:15 -- event/cpu_locks.sh@71 -- # locks_exist 57615 00:04:44.609 10:32:15 -- event/cpu_locks.sh@22 -- # lslocks -p 57615 00:04:44.609 10:32:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.866 10:32:15 -- event/cpu_locks.sh@73 -- # killprocess 57615 00:04:44.866 10:32:15 -- common/autotest_common.sh@936 -- # '[' -z 57615 ']' 00:04:44.866 10:32:15 -- common/autotest_common.sh@940 -- # kill -0 57615 00:04:44.866 10:32:15 -- common/autotest_common.sh@941 -- # uname 00:04:44.866 10:32:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.866 10:32:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57615 00:04:44.866 killing process with pid 57615 00:04:44.866 10:32:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.866 10:32:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.866 10:32:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57615' 00:04:44.866 10:32:15 -- common/autotest_common.sh@955 -- # kill 57615 00:04:44.866 10:32:15 -- common/autotest_common.sh@960 -- # wait 57615 00:04:46.236 00:04:46.236 real 0m2.216s 00:04:46.236 user 0m2.178s 00:04:46.236 sys 0m0.406s 00:04:46.236 10:32:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.236 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.236 ************************************ 00:04:46.236 END TEST default_locks_via_rpc 00:04:46.236 ************************************ 00:04:46.236 10:32:16 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:46.236 10:32:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.236 10:32:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.236 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.236 
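Annotation (not part of the captured log): the default_locks_via_rpc run above exercises the same /var/tmp/spdk_cpu_lock_* files as default_locks, but toggles them at runtime over JSON-RPC rather than at process startup. A minimal sketch of that sequence, assuming the stock scripts/rpc.py client; the two RPC names are taken verbatim from the trace:

  build/bin/spdk_tgt -m 0x1 &                       # starts with core 0 locked by default
  scripts/rpc.py framework_disable_cpumask_locks    # releases /var/tmp/spdk_cpu_lock_000
  scripts/rpc.py framework_enable_cpumask_locks     # re-claims it; lslocks -p <pid> | grep spdk_cpu_lock matches again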
************************************ 00:04:46.236 START TEST non_locking_app_on_locked_coremask 00:04:46.236 ************************************ 00:04:46.236 10:32:16 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:04:46.236 10:32:16 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57667 00:04:46.236 10:32:16 -- event/cpu_locks.sh@81 -- # waitforlisten 57667 /var/tmp/spdk.sock 00:04:46.236 10:32:16 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.236 10:32:16 -- common/autotest_common.sh@829 -- # '[' -z 57667 ']' 00:04:46.236 10:32:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.236 10:32:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.236 10:32:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.236 10:32:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.236 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.236 [2024-12-03 10:32:16.682898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.236 [2024-12-03 10:32:16.682988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57667 ] 00:04:46.236 [2024-12-03 10:32:16.825287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.495 [2024-12-03 10:32:16.976144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.495 [2024-12-03 10:32:16.976317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.062 10:32:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.062 10:32:17 -- common/autotest_common.sh@862 -- # return 0 00:04:47.062 10:32:17 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57683 00:04:47.062 10:32:17 -- event/cpu_locks.sh@85 -- # waitforlisten 57683 /var/tmp/spdk2.sock 00:04:47.062 10:32:17 -- common/autotest_common.sh@829 -- # '[' -z 57683 ']' 00:04:47.062 10:32:17 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:47.062 10:32:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.062 10:32:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.062 10:32:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.062 10:32:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.062 10:32:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.062 [2024-12-03 10:32:17.571430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:47.062 [2024-12-03 10:32:17.572103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57683 ] 00:04:47.320 [2024-12-03 10:32:17.717390] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.320 [2024-12-03 10:32:17.717429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.656 [2024-12-03 10:32:18.003760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.656 [2024-12-03 10:32:18.003916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.589 10:32:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.589 10:32:19 -- common/autotest_common.sh@862 -- # return 0 00:04:48.589 10:32:19 -- event/cpu_locks.sh@87 -- # locks_exist 57667 00:04:48.589 10:32:19 -- event/cpu_locks.sh@22 -- # lslocks -p 57667 00:04:48.589 10:32:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.847 10:32:19 -- event/cpu_locks.sh@89 -- # killprocess 57667 00:04:48.847 10:32:19 -- common/autotest_common.sh@936 -- # '[' -z 57667 ']' 00:04:48.847 10:32:19 -- common/autotest_common.sh@940 -- # kill -0 57667 00:04:48.847 10:32:19 -- common/autotest_common.sh@941 -- # uname 00:04:48.847 10:32:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:48.847 10:32:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57667 00:04:48.847 killing process with pid 57667 00:04:48.847 10:32:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:48.847 10:32:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:48.847 10:32:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57667' 00:04:48.847 10:32:19 -- common/autotest_common.sh@955 -- # kill 57667 00:04:48.847 10:32:19 -- common/autotest_common.sh@960 -- # wait 57667 00:04:51.376 10:32:21 -- event/cpu_locks.sh@90 -- # killprocess 57683 00:04:51.376 10:32:21 -- common/autotest_common.sh@936 -- # '[' -z 57683 ']' 00:04:51.376 10:32:21 -- common/autotest_common.sh@940 -- # kill -0 57683 00:04:51.376 10:32:21 -- common/autotest_common.sh@941 -- # uname 00:04:51.376 10:32:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.376 10:32:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57683 00:04:51.376 killing process with pid 57683 00:04:51.376 10:32:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:51.377 10:32:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:51.377 10:32:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57683' 00:04:51.377 10:32:21 -- common/autotest_common.sh@955 -- # kill 57683 00:04:51.377 10:32:21 -- common/autotest_common.sh@960 -- # wait 57683 00:04:52.748 ************************************ 00:04:52.748 END TEST non_locking_app_on_locked_coremask 00:04:52.748 ************************************ 00:04:52.748 00:04:52.748 real 0m6.342s 00:04:52.748 user 0m6.728s 00:04:52.748 sys 0m0.807s 00:04:52.749 10:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.749 10:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.749 10:32:22 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:52.749 10:32:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.749 10:32:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.749 10:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:52.749 ************************************ 00:04:52.749 START TEST locking_app_on_unlocked_coremask 00:04:52.749 ************************************ 00:04:52.749 10:32:23 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:04:52.749 10:32:23 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57781 00:04:52.749 10:32:23 -- event/cpu_locks.sh@99 -- # waitforlisten 57781 /var/tmp/spdk.sock 00:04:52.749 10:32:23 -- common/autotest_common.sh@829 -- # '[' -z 57781 ']' 00:04:52.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.749 10:32:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.749 10:32:23 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:52.749 10:32:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.749 10:32:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.749 10:32:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.749 10:32:23 -- common/autotest_common.sh@10 -- # set +x 00:04:52.749 [2024-12-03 10:32:23.068116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:52.749 [2024-12-03 10:32:23.068220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57781 ] 00:04:52.749 [2024-12-03 10:32:23.213753] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:52.749 [2024-12-03 10:32:23.213901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.749 [2024-12-03 10:32:23.358872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.749 [2024-12-03 10:32:23.359033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.314 10:32:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.314 10:32:23 -- common/autotest_common.sh@862 -- # return 0 00:04:53.314 10:32:23 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57792 00:04:53.314 10:32:23 -- event/cpu_locks.sh@103 -- # waitforlisten 57792 /var/tmp/spdk2.sock 00:04:53.314 10:32:23 -- common/autotest_common.sh@829 -- # '[' -z 57792 ']' 00:04:53.314 10:32:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.314 10:32:23 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:53.314 10:32:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.314 10:32:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.314 10:32:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.314 10:32:23 -- common/autotest_common.sh@10 -- # set +x 00:04:53.571 [2024-12-03 10:32:23.946973] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:53.571 [2024-12-03 10:32:23.947149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57792 ] 00:04:53.571 [2024-12-03 10:32:24.093280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.829 [2024-12-03 10:32:24.389118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.829 [2024-12-03 10:32:24.389271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.199 10:32:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.199 10:32:25 -- common/autotest_common.sh@862 -- # return 0 00:04:55.199 10:32:25 -- event/cpu_locks.sh@105 -- # locks_exist 57792 00:04:55.199 10:32:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.199 10:32:25 -- event/cpu_locks.sh@22 -- # lslocks -p 57792 00:04:55.199 10:32:25 -- event/cpu_locks.sh@107 -- # killprocess 57781 00:04:55.199 10:32:25 -- common/autotest_common.sh@936 -- # '[' -z 57781 ']' 00:04:55.199 10:32:25 -- common/autotest_common.sh@940 -- # kill -0 57781 00:04:55.199 10:32:25 -- common/autotest_common.sh@941 -- # uname 00:04:55.199 10:32:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:55.199 10:32:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57781 00:04:55.199 killing process with pid 57781 00:04:55.199 10:32:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:55.199 10:32:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:55.199 10:32:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57781' 00:04:55.199 10:32:25 -- common/autotest_common.sh@955 -- # kill 57781 00:04:55.199 10:32:25 -- common/autotest_common.sh@960 -- # wait 57781 00:04:57.724 10:32:28 -- event/cpu_locks.sh@108 -- # killprocess 57792 00:04:57.724 10:32:28 -- common/autotest_common.sh@936 -- # '[' -z 57792 ']' 00:04:57.724 10:32:28 -- common/autotest_common.sh@940 -- # kill -0 57792 00:04:57.724 10:32:28 -- common/autotest_common.sh@941 -- # uname 00:04:57.724 10:32:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.724 10:32:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57792 00:04:57.724 killing process with pid 57792 00:04:57.724 10:32:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.724 10:32:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.724 10:32:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57792' 00:04:57.724 10:32:28 -- common/autotest_common.sh@955 -- # kill 57792 00:04:57.724 10:32:28 -- common/autotest_common.sh@960 -- # wait 57792 00:04:59.098 00:04:59.098 real 0m6.396s 00:04:59.098 user 0m6.765s 00:04:59.098 sys 0m0.841s 00:04:59.098 10:32:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.098 10:32:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.098 ************************************ 00:04:59.098 END TEST locking_app_on_unlocked_coremask 00:04:59.098 ************************************ 00:04:59.098 10:32:29 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:59.098 10:32:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.098 10:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.098 10:32:29 -- common/autotest_common.sh@10 -- # set +x 
00:04:59.098 ************************************ 00:04:59.098 START TEST locking_app_on_locked_coremask 00:04:59.098 ************************************ 00:04:59.098 10:32:29 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:04:59.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.098 10:32:29 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57896 00:04:59.098 10:32:29 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.098 10:32:29 -- event/cpu_locks.sh@116 -- # waitforlisten 57896 /var/tmp/spdk.sock 00:04:59.098 10:32:29 -- common/autotest_common.sh@829 -- # '[' -z 57896 ']' 00:04:59.098 10:32:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.098 10:32:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.098 10:32:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.098 10:32:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.098 10:32:29 -- common/autotest_common.sh@10 -- # set +x 00:04:59.098 [2024-12-03 10:32:29.525045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.098 [2024-12-03 10:32:29.525443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57896 ] 00:04:59.098 [2024-12-03 10:32:29.675219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.356 [2024-12-03 10:32:29.846623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.356 [2024-12-03 10:32:29.846964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.731 10:32:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.731 10:32:30 -- common/autotest_common.sh@862 -- # return 0 00:05:00.731 10:32:30 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57914 00:05:00.731 10:32:30 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57914 /var/tmp/spdk2.sock 00:05:00.731 10:32:30 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:00.731 10:32:30 -- common/autotest_common.sh@650 -- # local es=0 00:05:00.731 10:32:30 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57914 /var/tmp/spdk2.sock 00:05:00.731 10:32:30 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:00.731 10:32:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.731 10:32:30 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:00.731 10:32:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.731 10:32:30 -- common/autotest_common.sh@653 -- # waitforlisten 57914 /var/tmp/spdk2.sock 00:05:00.731 10:32:30 -- common/autotest_common.sh@829 -- # '[' -z 57914 ']' 00:05:00.731 10:32:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.731 10:32:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.731 10:32:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:00.731 10:32:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.731 10:32:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.731 [2024-12-03 10:32:31.046388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:00.731 [2024-12-03 10:32:31.046684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57914 ] 00:05:00.731 [2024-12-03 10:32:31.200180] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57896 has claimed it. 00:05:00.731 [2024-12-03 10:32:31.200236] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.298 ERROR: process (pid: 57914) is no longer running 00:05:01.298 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57914) - No such process 00:05:01.298 10:32:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.298 10:32:31 -- common/autotest_common.sh@862 -- # return 1 00:05:01.298 10:32:31 -- common/autotest_common.sh@653 -- # es=1 00:05:01.298 10:32:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.298 10:32:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:01.298 10:32:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.298 10:32:31 -- event/cpu_locks.sh@122 -- # locks_exist 57896 00:05:01.298 10:32:31 -- event/cpu_locks.sh@22 -- # lslocks -p 57896 00:05:01.298 10:32:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.298 10:32:31 -- event/cpu_locks.sh@124 -- # killprocess 57896 00:05:01.298 10:32:31 -- common/autotest_common.sh@936 -- # '[' -z 57896 ']' 00:05:01.298 10:32:31 -- common/autotest_common.sh@940 -- # kill -0 57896 00:05:01.298 10:32:31 -- common/autotest_common.sh@941 -- # uname 00:05:01.298 10:32:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:01.298 10:32:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57896 00:05:01.298 10:32:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:01.298 10:32:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:01.298 killing process with pid 57896 00:05:01.298 10:32:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57896' 00:05:01.298 10:32:31 -- common/autotest_common.sh@955 -- # kill 57896 00:05:01.298 10:32:31 -- common/autotest_common.sh@960 -- # wait 57896 00:05:02.672 ************************************ 00:05:02.672 END TEST locking_app_on_locked_coremask 00:05:02.672 ************************************ 00:05:02.672 00:05:02.672 real 0m3.579s 00:05:02.672 user 0m3.873s 00:05:02.672 sys 0m0.546s 00:05:02.672 10:32:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.672 10:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:02.672 10:32:33 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:02.672 10:32:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.672 10:32:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.672 10:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:02.672 ************************************ 00:05:02.672 START TEST locking_overlapped_coremask 00:05:02.672 ************************************ 00:05:02.672 10:32:33 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:02.672 10:32:33 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57967 00:05:02.672 10:32:33 -- event/cpu_locks.sh@133 -- # waitforlisten 57967 /var/tmp/spdk.sock 00:05:02.672 10:32:33 -- common/autotest_common.sh@829 -- # '[' -z 57967 ']' 00:05:02.672 10:32:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.672 10:32:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.672 10:32:33 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:02.672 10:32:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.672 10:32:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.673 10:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:02.673 [2024-12-03 10:32:33.146579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.673 [2024-12-03 10:32:33.146858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57967 ] 00:05:02.931 [2024-12-03 10:32:33.296562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.931 [2024-12-03 10:32:33.495670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.931 [2024-12-03 10:32:33.496110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.931 [2024-12-03 10:32:33.496302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.931 [2024-12-03 10:32:33.496404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.304 10:32:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.304 10:32:34 -- common/autotest_common.sh@862 -- # return 0 00:05:04.304 10:32:34 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:04.304 10:32:34 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57998 00:05:04.304 10:32:34 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57998 /var/tmp/spdk2.sock 00:05:04.304 10:32:34 -- common/autotest_common.sh@650 -- # local es=0 00:05:04.304 10:32:34 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57998 /var/tmp/spdk2.sock 00:05:04.304 10:32:34 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:04.304 10:32:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.304 10:32:34 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:04.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.304 10:32:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.304 10:32:34 -- common/autotest_common.sh@653 -- # waitforlisten 57998 /var/tmp/spdk2.sock 00:05:04.304 10:32:34 -- common/autotest_common.sh@829 -- # '[' -z 57998 ']' 00:05:04.304 10:32:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.304 10:32:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.304 10:32:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:04.304 10:32:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.304 10:32:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.304 [2024-12-03 10:32:34.695161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:04.304 [2024-12-03 10:32:34.695445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57998 ] 00:05:04.304 [2024-12-03 10:32:34.848113] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57967 has claimed it. 00:05:04.304 [2024-12-03 10:32:34.848170] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:04.885 ERROR: process (pid: 57998) is no longer running 00:05:04.885 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57998) - No such process 00:05:04.885 10:32:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.885 10:32:35 -- common/autotest_common.sh@862 -- # return 1 00:05:04.885 10:32:35 -- common/autotest_common.sh@653 -- # es=1 00:05:04.885 10:32:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:04.885 10:32:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:04.885 10:32:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:04.885 10:32:35 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:04.885 10:32:35 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:04.885 10:32:35 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:04.885 10:32:35 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:04.885 10:32:35 -- event/cpu_locks.sh@141 -- # killprocess 57967 00:05:04.885 10:32:35 -- common/autotest_common.sh@936 -- # '[' -z 57967 ']' 00:05:04.885 10:32:35 -- common/autotest_common.sh@940 -- # kill -0 57967 00:05:04.885 10:32:35 -- common/autotest_common.sh@941 -- # uname 00:05:04.885 10:32:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.885 10:32:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57967 00:05:04.885 killing process with pid 57967 00:05:04.885 10:32:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.885 10:32:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.885 10:32:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57967' 00:05:04.885 10:32:35 -- common/autotest_common.sh@955 -- # kill 57967 00:05:04.885 10:32:35 -- common/autotest_common.sh@960 -- # wait 57967 00:05:06.794 00:05:06.794 real 0m3.840s 00:05:06.794 user 0m10.337s 00:05:06.794 sys 0m0.427s 00:05:06.794 ************************************ 00:05:06.794 END TEST locking_overlapped_coremask 00:05:06.794 ************************************ 00:05:06.794 10:32:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.794 10:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 10:32:36 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:06.794 10:32:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.794 10:32:36 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.794 10:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 ************************************ 00:05:06.794 START TEST locking_overlapped_coremask_via_rpc 00:05:06.794 ************************************ 00:05:06.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.794 10:32:36 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:06.795 10:32:36 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58051 00:05:06.795 10:32:36 -- event/cpu_locks.sh@149 -- # waitforlisten 58051 /var/tmp/spdk.sock 00:05:06.795 10:32:36 -- common/autotest_common.sh@829 -- # '[' -z 58051 ']' 00:05:06.795 10:32:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.795 10:32:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.795 10:32:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.795 10:32:36 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:06.795 10:32:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.795 10:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.795 [2024-12-03 10:32:37.030019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:06.795 [2024-12-03 10:32:37.030141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58051 ] 00:05:06.795 [2024-12-03 10:32:37.179925] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:06.795 [2024-12-03 10:32:37.179980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.795 [2024-12-03 10:32:37.385639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.795 [2024-12-03 10:32:37.385982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.795 [2024-12-03 10:32:37.386154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.795 [2024-12-03 10:32:37.386169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.172 10:32:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.172 10:32:38 -- common/autotest_common.sh@862 -- # return 0 00:05:08.172 10:32:38 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:08.172 10:32:38 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58076 00:05:08.172 10:32:38 -- event/cpu_locks.sh@153 -- # waitforlisten 58076 /var/tmp/spdk2.sock 00:05:08.172 10:32:38 -- common/autotest_common.sh@829 -- # '[' -z 58076 ']' 00:05:08.172 10:32:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.172 10:32:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.172 10:32:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:08.172 10:32:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.172 10:32:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.172 [2024-12-03 10:32:38.580724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:08.172 [2024-12-03 10:32:38.580829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58076 ] 00:05:08.172 [2024-12-03 10:32:38.728808] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:08.173 [2024-12-03 10:32:38.728851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.430 [2024-12-03 10:32:39.039746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.430 [2024-12-03 10:32:39.040087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.687 [2024-12-03 10:32:39.043146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.687 [2024-12-03 10:32:39.043178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:09.692 10:32:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.692 10:32:40 -- common/autotest_common.sh@862 -- # return 0 00:05:09.692 10:32:40 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:09.692 10:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.692 10:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.692 10:32:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.692 10:32:40 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:09.692 10:32:40 -- common/autotest_common.sh@650 -- # local es=0 00:05:09.692 10:32:40 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:09.692 10:32:40 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:09.692 10:32:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.692 10:32:40 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:09.692 10:32:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.692 10:32:40 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:09.692 10:32:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.692 10:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.692 [2024-12-03 10:32:40.062235] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58051 has claimed it. 00:05:09.692 request: 00:05:09.692 { 00:05:09.692 "method": "framework_enable_cpumask_locks", 00:05:09.692 "req_id": 1 00:05:09.692 } 00:05:09.692 Got JSON-RPC error response 00:05:09.692 response: 00:05:09.692 { 00:05:09.692 "code": -32603, 00:05:09.692 "message": "Failed to claim CPU core: 2" 00:05:09.692 } 00:05:09.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
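Annotation (not part of the captured log): the JSON-RPC exchange above is the intended failure path of this test. Both targets start with --disable-cpumask-locks; once the first (mask 0x7) enables locks it claims cores 0-2, so the second (mask 0x1c, which overlaps on core 2) receives the -32603 'Failed to claim CPU core: 2' error. A minimal reproduction under those assumptions, using the stock scripts/rpc.py client:

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  scripts/rpc.py framework_enable_cpumask_locks                      # first target claims cores 0,1,2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo 'expected failure: core 2 already claimed'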
00:05:09.692 10:32:40 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:09.692 10:32:40 -- common/autotest_common.sh@653 -- # es=1 00:05:09.692 10:32:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:09.692 10:32:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:09.692 10:32:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:09.692 10:32:40 -- event/cpu_locks.sh@158 -- # waitforlisten 58051 /var/tmp/spdk.sock 00:05:09.692 10:32:40 -- common/autotest_common.sh@829 -- # '[' -z 58051 ']' 00:05:09.692 10:32:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.692 10:32:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.692 10:32:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.692 10:32:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.692 10:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.692 10:32:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.692 10:32:40 -- common/autotest_common.sh@862 -- # return 0 00:05:09.692 10:32:40 -- event/cpu_locks.sh@159 -- # waitforlisten 58076 /var/tmp/spdk2.sock 00:05:09.692 10:32:40 -- common/autotest_common.sh@829 -- # '[' -z 58076 ']' 00:05:09.692 10:32:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.692 10:32:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.692 10:32:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.692 10:32:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.692 10:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.949 ************************************ 00:05:09.949 END TEST locking_overlapped_coremask_via_rpc 00:05:09.949 ************************************ 00:05:09.949 10:32:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.949 10:32:40 -- common/autotest_common.sh@862 -- # return 0 00:05:09.949 10:32:40 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:09.949 10:32:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:09.949 10:32:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:09.949 10:32:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:09.949 00:05:09.949 real 0m3.468s 00:05:09.949 user 0m1.244s 00:05:09.949 sys 0m0.155s 00:05:09.949 10:32:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.949 10:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.949 10:32:40 -- event/cpu_locks.sh@174 -- # cleanup 00:05:09.949 10:32:40 -- event/cpu_locks.sh@15 -- # [[ -z 58051 ]] 00:05:09.949 10:32:40 -- event/cpu_locks.sh@15 -- # killprocess 58051 00:05:09.949 10:32:40 -- common/autotest_common.sh@936 -- # '[' -z 58051 ']' 00:05:09.949 10:32:40 -- common/autotest_common.sh@940 -- # kill -0 58051 00:05:09.949 10:32:40 -- common/autotest_common.sh@941 -- # uname 00:05:09.949 10:32:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.949 10:32:40 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 58051 00:05:09.949 killing process with pid 58051 00:05:09.949 10:32:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.949 10:32:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.949 10:32:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58051' 00:05:09.949 10:32:40 -- common/autotest_common.sh@955 -- # kill 58051 00:05:09.949 10:32:40 -- common/autotest_common.sh@960 -- # wait 58051 00:05:11.845 10:32:41 -- event/cpu_locks.sh@16 -- # [[ -z 58076 ]] 00:05:11.845 10:32:41 -- event/cpu_locks.sh@16 -- # killprocess 58076 00:05:11.845 10:32:41 -- common/autotest_common.sh@936 -- # '[' -z 58076 ']' 00:05:11.845 10:32:41 -- common/autotest_common.sh@940 -- # kill -0 58076 00:05:11.845 10:32:41 -- common/autotest_common.sh@941 -- # uname 00:05:11.845 10:32:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.845 10:32:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58076 00:05:11.845 killing process with pid 58076 00:05:11.845 10:32:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:11.845 10:32:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:11.845 10:32:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58076' 00:05:11.845 10:32:42 -- common/autotest_common.sh@955 -- # kill 58076 00:05:11.845 10:32:42 -- common/autotest_common.sh@960 -- # wait 58076 00:05:12.779 10:32:43 -- event/cpu_locks.sh@18 -- # rm -f 00:05:12.779 10:32:43 -- event/cpu_locks.sh@1 -- # cleanup 00:05:12.779 10:32:43 -- event/cpu_locks.sh@15 -- # [[ -z 58051 ]] 00:05:12.779 10:32:43 -- event/cpu_locks.sh@15 -- # killprocess 58051 00:05:12.779 10:32:43 -- common/autotest_common.sh@936 -- # '[' -z 58051 ']' 00:05:12.779 10:32:43 -- common/autotest_common.sh@940 -- # kill -0 58051 00:05:12.779 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58051) - No such process 00:05:12.779 10:32:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58051 is not found' 00:05:12.779 Process with pid 58051 is not found 00:05:12.779 10:32:43 -- event/cpu_locks.sh@16 -- # [[ -z 58076 ]] 00:05:12.780 10:32:43 -- event/cpu_locks.sh@16 -- # killprocess 58076 00:05:12.780 10:32:43 -- common/autotest_common.sh@936 -- # '[' -z 58076 ']' 00:05:12.780 10:32:43 -- common/autotest_common.sh@940 -- # kill -0 58076 00:05:12.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58076) - No such process 00:05:12.780 Process with pid 58076 is not found 00:05:12.780 10:32:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58076 is not found' 00:05:12.780 10:32:43 -- event/cpu_locks.sh@18 -- # rm -f 00:05:12.780 ************************************ 00:05:12.780 END TEST cpu_locks 00:05:12.780 ************************************ 00:05:12.780 00:05:12.780 real 0m31.403s 00:05:12.780 user 0m55.936s 00:05:12.780 sys 0m4.489s 00:05:12.780 10:32:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.780 10:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.780 ************************************ 00:05:12.780 END TEST event 00:05:12.780 ************************************ 00:05:12.780 00:05:12.780 real 0m58.703s 00:05:12.780 user 1m48.589s 00:05:12.780 sys 0m7.228s 00:05:12.780 10:32:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.780 10:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.780 10:32:43 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:12.780 10:32:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.780 10:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.780 10:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.780 ************************************ 00:05:12.780 START TEST thread 00:05:12.780 ************************************ 00:05:12.780 10:32:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:12.780 * Looking for test storage... 00:05:12.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:12.780 10:32:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.780 10:32:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.780 10:32:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.780 10:32:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.780 10:32:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.780 10:32:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.780 10:32:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.780 10:32:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.780 10:32:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.780 10:32:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.780 10:32:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.780 10:32:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.780 10:32:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.780 10:32:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.780 10:32:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.780 10:32:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.780 10:32:43 -- scripts/common.sh@344 -- # : 1 00:05:12.780 10:32:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.780 10:32:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.780 10:32:43 -- scripts/common.sh@364 -- # decimal 1 00:05:12.780 10:32:43 -- scripts/common.sh@352 -- # local d=1 00:05:12.780 10:32:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.780 10:32:43 -- scripts/common.sh@354 -- # echo 1 00:05:12.780 10:32:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.780 10:32:43 -- scripts/common.sh@365 -- # decimal 2 00:05:12.780 10:32:43 -- scripts/common.sh@352 -- # local d=2 00:05:12.780 10:32:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.780 10:32:43 -- scripts/common.sh@354 -- # echo 2 00:05:12.780 10:32:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.780 10:32:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.780 10:32:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.780 10:32:43 -- scripts/common.sh@367 -- # return 0 00:05:12.780 10:32:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.780 10:32:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.780 --rc genhtml_branch_coverage=1 00:05:12.780 --rc genhtml_function_coverage=1 00:05:12.780 --rc genhtml_legend=1 00:05:12.780 --rc geninfo_all_blocks=1 00:05:12.780 --rc geninfo_unexecuted_blocks=1 00:05:12.780 00:05:12.780 ' 00:05:12.780 10:32:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.780 --rc genhtml_branch_coverage=1 00:05:12.780 --rc genhtml_function_coverage=1 00:05:12.780 --rc genhtml_legend=1 00:05:12.780 --rc geninfo_all_blocks=1 00:05:12.780 --rc geninfo_unexecuted_blocks=1 00:05:12.780 00:05:12.780 ' 00:05:12.780 10:32:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.780 --rc genhtml_branch_coverage=1 00:05:12.780 --rc genhtml_function_coverage=1 00:05:12.780 --rc genhtml_legend=1 00:05:12.780 --rc geninfo_all_blocks=1 00:05:12.780 --rc geninfo_unexecuted_blocks=1 00:05:12.780 00:05:12.780 ' 00:05:12.780 10:32:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.780 --rc genhtml_branch_coverage=1 00:05:12.780 --rc genhtml_function_coverage=1 00:05:12.780 --rc genhtml_legend=1 00:05:12.780 --rc geninfo_all_blocks=1 00:05:12.780 --rc geninfo_unexecuted_blocks=1 00:05:12.780 00:05:12.780 ' 00:05:12.780 10:32:43 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:12.780 10:32:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:12.780 10:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.780 10:32:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.780 ************************************ 00:05:12.780 START TEST thread_poller_perf 00:05:12.780 ************************************ 00:05:12.780 10:32:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.039 [2024-12-03 10:32:43.411485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:13.039 [2024-12-03 10:32:43.411656] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58232 ] 00:05:13.039 [2024-12-03 10:32:43.556084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.297 [2024-12-03 10:32:43.731864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.297 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:14.681 [2024-12-03T10:32:45.294Z] ====================================== 00:05:14.681 [2024-12-03T10:32:45.294Z] busy:2618197578 (cyc) 00:05:14.681 [2024-12-03T10:32:45.294Z] total_run_count: 294000 00:05:14.681 [2024-12-03T10:32:45.294Z] tsc_hz: 2600000000 (cyc) 00:05:14.681 [2024-12-03T10:32:45.294Z] ====================================== 00:05:14.681 [2024-12-03T10:32:45.294Z] poller_cost: 8905 (cyc), 3425 (nsec) 00:05:14.681 00:05:14.681 real 0m1.628s 00:05:14.681 user 0m1.441s 00:05:14.681 sys 0m0.076s 00:05:14.681 10:32:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.681 10:32:45 -- common/autotest_common.sh@10 -- # set +x 00:05:14.681 ************************************ 00:05:14.681 END TEST thread_poller_perf 00:05:14.681 ************************************ 00:05:14.681 10:32:45 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.681 10:32:45 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:14.681 10:32:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.681 10:32:45 -- common/autotest_common.sh@10 -- # set +x 00:05:14.681 ************************************ 00:05:14.681 START TEST thread_poller_perf 00:05:14.681 ************************************ 00:05:14.681 10:32:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.681 [2024-12-03 10:32:45.096573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:14.681 [2024-12-03 10:32:45.096804] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58274 ] 00:05:14.681 [2024-12-03 10:32:45.240282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.941 Running 1000 pollers for 1 seconds with 0 microseconds period. 
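Annotation (not part of the captured log): poller_cost in the summary blocks is derived from the printed counters rather than measured independently. For the first run above, 2618197578 busy cycles / 294000 runs = 8905 cycles per poller invocation, and 8905 cycles at the reported 2.6 GHz TSC is 3425 nsec. The same arithmetic in shell form:

  busy=2618197578; runs=294000; tsc_hz=2600000000
  echo $(( busy / runs ))                          # 8905 (cyc)
  echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3425 (nsec)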
00:05:14.941 [2024-12-03 10:32:45.424433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.311 [2024-12-03T10:32:46.924Z] ====================================== 00:05:16.311 [2024-12-03T10:32:46.924Z] busy:2604211798 (cyc) 00:05:16.311 [2024-12-03T10:32:46.924Z] total_run_count: 4277000 00:05:16.311 [2024-12-03T10:32:46.924Z] tsc_hz: 2600000000 (cyc) 00:05:16.311 [2024-12-03T10:32:46.924Z] ====================================== 00:05:16.311 [2024-12-03T10:32:46.924Z] poller_cost: 608 (cyc), 233 (nsec) 00:05:16.311 00:05:16.311 real 0m1.565s 00:05:16.311 user 0m1.381s 00:05:16.311 sys 0m0.076s 00:05:16.311 10:32:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.311 ************************************ 00:05:16.311 END TEST thread_poller_perf 00:05:16.311 ************************************ 00:05:16.311 10:32:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.311 10:32:46 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.311 ************************************ 00:05:16.311 END TEST thread 00:05:16.311 ************************************ 00:05:16.311 00:05:16.311 real 0m3.432s 00:05:16.311 user 0m2.921s 00:05:16.311 sys 0m0.267s 00:05:16.311 10:32:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.311 10:32:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.311 10:32:46 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:16.311 10:32:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.311 10:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.311 10:32:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.311 ************************************ 00:05:16.311 START TEST accel 00:05:16.311 ************************************ 00:05:16.311 10:32:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:16.311 * Looking for test storage... 00:05:16.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:16.311 10:32:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.311 10:32:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.311 10:32:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.311 10:32:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.311 10:32:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.311 10:32:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.311 10:32:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.311 10:32:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.311 10:32:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.312 10:32:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.312 10:32:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:16.312 10:32:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.312 10:32:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.312 10:32:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.312 10:32:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.312 10:32:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.312 10:32:46 -- scripts/common.sh@344 -- # : 1 00:05:16.312 10:32:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.312 10:32:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.312 10:32:46 -- scripts/common.sh@364 -- # decimal 1 00:05:16.312 10:32:46 -- scripts/common.sh@352 -- # local d=1 00:05:16.312 10:32:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.312 10:32:46 -- scripts/common.sh@354 -- # echo 1 00:05:16.312 10:32:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.312 10:32:46 -- scripts/common.sh@365 -- # decimal 2 00:05:16.312 10:32:46 -- scripts/common.sh@352 -- # local d=2 00:05:16.312 10:32:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.312 10:32:46 -- scripts/common.sh@354 -- # echo 2 00:05:16.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.312 10:32:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.312 10:32:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.312 10:32:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.312 10:32:46 -- scripts/common.sh@367 -- # return 0 00:05:16.312 10:32:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.312 10:32:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.312 --rc genhtml_branch_coverage=1 00:05:16.312 --rc genhtml_function_coverage=1 00:05:16.312 --rc genhtml_legend=1 00:05:16.312 --rc geninfo_all_blocks=1 00:05:16.312 --rc geninfo_unexecuted_blocks=1 00:05:16.312 00:05:16.312 ' 00:05:16.312 10:32:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.312 --rc genhtml_branch_coverage=1 00:05:16.312 --rc genhtml_function_coverage=1 00:05:16.312 --rc genhtml_legend=1 00:05:16.312 --rc geninfo_all_blocks=1 00:05:16.312 --rc geninfo_unexecuted_blocks=1 00:05:16.312 00:05:16.312 ' 00:05:16.312 10:32:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.312 --rc genhtml_branch_coverage=1 00:05:16.312 --rc genhtml_function_coverage=1 00:05:16.312 --rc genhtml_legend=1 00:05:16.312 --rc geninfo_all_blocks=1 00:05:16.312 --rc geninfo_unexecuted_blocks=1 00:05:16.312 00:05:16.312 ' 00:05:16.312 10:32:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.312 --rc genhtml_branch_coverage=1 00:05:16.312 --rc genhtml_function_coverage=1 00:05:16.312 --rc genhtml_legend=1 00:05:16.312 --rc geninfo_all_blocks=1 00:05:16.312 --rc geninfo_unexecuted_blocks=1 00:05:16.312 00:05:16.312 ' 00:05:16.312 10:32:46 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:16.312 10:32:46 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:16.312 10:32:46 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.312 10:32:46 -- accel/accel.sh@59 -- # spdk_tgt_pid=58362 00:05:16.312 10:32:46 -- accel/accel.sh@60 -- # waitforlisten 58362 00:05:16.312 10:32:46 -- common/autotest_common.sh@829 -- # '[' -z 58362 ']' 00:05:16.312 10:32:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.312 10:32:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.312 10:32:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:16.312 10:32:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.312 10:32:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.312 10:32:46 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:16.312 10:32:46 -- accel/accel.sh@58 -- # build_accel_config 00:05:16.312 10:32:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:16.312 10:32:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.312 10:32:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.312 10:32:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:16.312 10:32:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:16.312 10:32:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:16.312 10:32:46 -- accel/accel.sh@42 -- # jq -r . 00:05:16.571 [2024-12-03 10:32:46.941489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.571 [2024-12-03 10:32:46.941603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58362 ] 00:05:16.571 [2024-12-03 10:32:47.089417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.828 [2024-12-03 10:32:47.238407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.828 [2024-12-03 10:32:47.238577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.395 10:32:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.395 10:32:47 -- common/autotest_common.sh@862 -- # return 0 00:05:17.395 10:32:47 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:17.395 10:32:47 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:17.395 10:32:47 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:17.395 10:32:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.395 10:32:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.395 10:32:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # 
IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # IFS== 00:05:17.395 10:32:47 -- accel/accel.sh@64 -- # read -r opc module 00:05:17.395 10:32:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:17.395 10:32:47 -- accel/accel.sh@67 -- # killprocess 58362 00:05:17.395 10:32:47 -- common/autotest_common.sh@936 -- # '[' -z 58362 ']' 00:05:17.395 10:32:47 -- common/autotest_common.sh@940 -- # kill -0 58362 00:05:17.395 10:32:47 -- common/autotest_common.sh@941 -- # uname 00:05:17.395 10:32:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.395 10:32:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58362 00:05:17.395 killing process with pid 58362 00:05:17.395 10:32:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.395 10:32:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.395 10:32:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58362' 00:05:17.395 10:32:47 -- common/autotest_common.sh@955 -- # kill 58362 00:05:17.395 10:32:47 -- common/autotest_common.sh@960 -- # wait 58362 00:05:18.770 10:32:49 -- accel/accel.sh@68 -- # trap - ERR 00:05:18.770 10:32:49 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:18.770 10:32:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:18.770 10:32:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.770 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:18.770 10:32:49 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:18.770 10:32:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:18.770 10:32:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.770 10:32:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.770 10:32:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.770 10:32:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.770 10:32:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.770 10:32:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.770 10:32:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.770 10:32:49 -- accel/accel.sh@42 -- # jq -r . 
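The long run of IFS== / read -r lines earlier in this test is get_expected_opcs recording which module serves each accel opcode; with no hardware accelerators configured, every opcode maps to software. Condensed, the bookkeeping amounts to roughly this (the scripts/rpc.py entry point is an assumption; the jq filter is the one shown in the trace):

declare -A expected_opcs
while IFS== read -r opc module; do
    expected_opcs["$opc"]=$module    # e.g. expected_opcs[crc32c]=software
done < <(scripts/rpc.py accel_get_opc_assignments |
         jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')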
00:05:18.770 10:32:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.770 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:18.770 10:32:49 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:18.770 10:32:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:18.770 10:32:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.770 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:18.770 ************************************ 00:05:18.770 START TEST accel_missing_filename 00:05:18.770 ************************************ 00:05:18.770 10:32:49 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:18.770 10:32:49 -- common/autotest_common.sh@650 -- # local es=0 00:05:18.770 10:32:49 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:18.770 10:32:49 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:18.770 10:32:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.770 10:32:49 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:18.770 10:32:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.770 10:32:49 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:18.770 10:32:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:18.770 10:32:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.770 10:32:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.770 10:32:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.770 10:32:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.770 10:32:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.770 10:32:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.770 10:32:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.770 10:32:49 -- accel/accel.sh@42 -- # jq -r . 00:05:18.770 [2024-12-03 10:32:49.199521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:18.770 [2024-12-03 10:32:49.199609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58421 ] 00:05:18.770 [2024-12-03 10:32:49.336387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.029 [2024-12-03 10:32:49.488429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.029 [2024-12-03 10:32:49.599987] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:19.286 [2024-12-03 10:32:49.864533] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:19.544 A filename is required. 
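"A filename is required." is the expected outcome here: compress workloads read their input from a file, so accel_perf launched without -l has to exit non-zero for the NOT wrapper to count the test as passed. The positive counterpart, which the next test exercises, hands the bib corpus to -l (paths as used in this job; shown only as a usage sketch):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib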
00:05:19.544 10:32:50 -- common/autotest_common.sh@653 -- # es=234 00:05:19.544 10:32:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.544 10:32:50 -- common/autotest_common.sh@662 -- # es=106 00:05:19.544 10:32:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:19.544 10:32:50 -- common/autotest_common.sh@670 -- # es=1 00:05:19.544 10:32:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.544 00:05:19.544 real 0m0.904s 00:05:19.544 user 0m0.720s 00:05:19.544 sys 0m0.106s 00:05:19.544 10:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.544 ************************************ 00:05:19.544 END TEST accel_missing_filename 00:05:19.544 ************************************ 00:05:19.544 10:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.544 10:32:50 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.544 10:32:50 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:19.544 10:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.544 10:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.544 ************************************ 00:05:19.544 START TEST accel_compress_verify 00:05:19.544 ************************************ 00:05:19.544 10:32:50 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.544 10:32:50 -- common/autotest_common.sh@650 -- # local es=0 00:05:19.544 10:32:50 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.544 10:32:50 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:19.544 10:32:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.544 10:32:50 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:19.544 10:32:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.544 10:32:50 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.544 10:32:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.544 10:32:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.544 10:32:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.544 10:32:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.544 10:32:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.544 10:32:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.544 10:32:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.544 10:32:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.544 10:32:50 -- accel/accel.sh@42 -- # jq -r . 00:05:19.544 [2024-12-03 10:32:50.148158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:19.544 [2024-12-03 10:32:50.148378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58452 ] 00:05:19.801 [2024-12-03 10:32:50.293899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.058 [2024-12-03 10:32:50.482749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.058 [2024-12-03 10:32:50.597586] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.315 [2024-12-03 10:32:50.858709] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:20.574 00:05:20.574 Compression does not support the verify option, aborting. 00:05:20.574 10:32:51 -- common/autotest_common.sh@653 -- # es=161 00:05:20.574 10:32:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.574 10:32:51 -- common/autotest_common.sh@662 -- # es=33 00:05:20.574 10:32:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:20.574 10:32:51 -- common/autotest_common.sh@670 -- # es=1 00:05:20.574 10:32:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.574 00:05:20.574 real 0m0.960s 00:05:20.574 user 0m0.755s 00:05:20.574 sys 0m0.126s 00:05:20.574 10:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.574 ************************************ 00:05:20.574 END TEST accel_compress_verify 00:05:20.574 ************************************ 00:05:20.574 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.574 10:32:51 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:20.574 10:32:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:20.574 10:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.574 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.574 ************************************ 00:05:20.574 START TEST accel_wrong_workload 00:05:20.574 ************************************ 00:05:20.574 10:32:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:20.574 10:32:51 -- common/autotest_common.sh@650 -- # local es=0 00:05:20.574 10:32:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:20.574 10:32:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:20.574 10:32:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.574 10:32:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:20.574 10:32:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.574 10:32:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:20.574 10:32:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:20.574 10:32:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.574 10:32:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.574 10:32:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.574 10:32:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.574 10:32:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.574 10:32:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.574 10:32:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.574 10:32:51 -- accel/accel.sh@42 -- # jq -r . 
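The @32-@42 block that precedes every accel_perf launch is build_accel_config assembling the JSON the app reads back through -c /dev/fd/62: optional module entries accumulate in accel_json_cfg, local IFS=, joins them, and jq -r . renders the document. Roughly, as a reconstruction only (the guard variable and RPC method name below are placeholders, not the flags accel.sh really tests):

build_accel_config() {
    accel_json_cfg=()
    # hypothetical module toggle; the real script's checks are the [[ 0 -gt 0 ]] lines above
    [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
    local IFS=,
    jq -r . <<JSON
{"subsystems": [{"subsystem": "accel", "config": [${accel_json_cfg[*]}]}]}
JSON
}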
00:05:20.574 Unsupported workload type: foobar 00:05:20.574 [2024-12-03 10:32:51.149833] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:20.574 accel_perf options: 00:05:20.574 [-h help message] 00:05:20.574 [-q queue depth per core] 00:05:20.574 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:20.574 [-T number of threads per core 00:05:20.574 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:20.574 [-t time in seconds] 00:05:20.574 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:20.574 [ dif_verify, , dif_generate, dif_generate_copy 00:05:20.574 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:20.574 [-l for compress/decompress workloads, name of uncompressed input file 00:05:20.574 [-S for crc32c workload, use this seed value (default 0) 00:05:20.574 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:20.574 [-f for fill workload, use this BYTE value (default 255) 00:05:20.574 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:20.574 [-y verify result if this switch is on] 00:05:20.574 [-a tasks to allocate per core (default: same value as -q)] 00:05:20.574 Can be used to spread operations across a wider range of memory. 00:05:20.574 ************************************ 00:05:20.574 END TEST accel_wrong_workload 00:05:20.574 ************************************ 00:05:20.574 10:32:51 -- common/autotest_common.sh@653 -- # es=1 00:05:20.574 10:32:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.574 10:32:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:20.574 10:32:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.574 00:05:20.574 real 0m0.047s 00:05:20.574 user 0m0.049s 00:05:20.574 sys 0m0.026s 00:05:20.574 10:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.574 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.832 10:32:51 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:20.832 10:32:51 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:20.832 10:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.832 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.832 ************************************ 00:05:20.832 START TEST accel_negative_buffers 00:05:20.832 ************************************ 00:05:20.832 10:32:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:20.832 10:32:51 -- common/autotest_common.sh@650 -- # local es=0 00:05:20.832 10:32:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:20.832 10:32:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:20.832 10:32:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.832 10:32:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:20.832 10:32:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.832 10:32:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:20.832 10:32:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:20.832 10:32:51 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:20.832 10:32:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.832 10:32:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.832 10:32:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.832 10:32:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.832 10:32:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.832 10:32:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.832 10:32:51 -- accel/accel.sh@42 -- # jq -r . 00:05:20.832 -x option must be non-negative. 00:05:20.832 [2024-12-03 10:32:51.233558] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:20.832 accel_perf options: 00:05:20.832 [-h help message] 00:05:20.832 [-q queue depth per core] 00:05:20.832 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:20.832 [-T number of threads per core 00:05:20.832 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:20.832 [-t time in seconds] 00:05:20.832 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:20.832 [ dif_verify, , dif_generate, dif_generate_copy 00:05:20.832 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:20.832 [-l for compress/decompress workloads, name of uncompressed input file 00:05:20.832 [-S for crc32c workload, use this seed value (default 0) 00:05:20.832 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:20.832 [-f for fill workload, use this BYTE value (default 255) 00:05:20.832 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:20.832 [-y verify result if this switch is on] 00:05:20.832 [-a tasks to allocate per core (default: same value as -q)] 00:05:20.832 Can be used to spread operations across a wider range of memory. 
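The parser rejects -x -1 before any work is queued, since xor needs at least two source buffers, and the resulting exit code 1 is exactly what the NOT wrapper expects. A valid xor invocation differs only in that one argument (three source buffers picked arbitrarily for illustration):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3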
00:05:20.832 10:32:51 -- common/autotest_common.sh@653 -- # es=1 00:05:20.832 10:32:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.832 10:32:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:20.832 10:32:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.832 00:05:20.832 real 0m0.053s 00:05:20.832 user 0m0.052s 00:05:20.832 sys 0m0.032s 00:05:20.832 10:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.832 ************************************ 00:05:20.832 END TEST accel_negative_buffers 00:05:20.832 ************************************ 00:05:20.832 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.832 10:32:51 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:20.832 10:32:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:20.832 10:32:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.832 10:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.832 ************************************ 00:05:20.832 START TEST accel_crc32c 00:05:20.832 ************************************ 00:05:20.832 10:32:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:20.832 10:32:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.832 10:32:51 -- accel/accel.sh@17 -- # local accel_module 00:05:20.832 10:32:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:20.832 10:32:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:20.832 10:32:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.832 10:32:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.832 10:32:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.832 10:32:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.832 10:32:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.832 10:32:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.832 10:32:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.832 10:32:51 -- accel/accel.sh@42 -- # jq -r . 00:05:20.832 [2024-12-03 10:32:51.323953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:20.832 [2024-12-03 10:32:51.324035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58519 ] 00:05:21.091 [2024-12-03 10:32:51.462420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.091 [2024-12-03 10:32:51.605654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.990 10:32:53 -- accel/accel.sh@18 -- # out=' 00:05:22.990 SPDK Configuration: 00:05:22.990 Core mask: 0x1 00:05:22.990 00:05:22.990 Accel Perf Configuration: 00:05:22.990 Workload Type: crc32c 00:05:22.990 CRC-32C seed: 32 00:05:22.990 Transfer size: 4096 bytes 00:05:22.990 Vector count 1 00:05:22.990 Module: software 00:05:22.990 Queue depth: 32 00:05:22.990 Allocate depth: 32 00:05:22.990 # threads/core: 1 00:05:22.990 Run time: 1 seconds 00:05:22.990 Verify: Yes 00:05:22.990 00:05:22.990 Running for 1 seconds... 
00:05:22.990 00:05:22.990 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:22.990 ------------------------------------------------------------------------------------ 00:05:22.990 0,0 593408/s 2318 MiB/s 0 0 00:05:22.990 ==================================================================================== 00:05:22.990 Total 593408/s 2318 MiB/s 0 0' 00:05:22.990 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:22.990 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:22.990 10:32:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:22.990 10:32:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:22.990 10:32:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.990 10:32:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.990 10:32:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.990 10:32:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.990 10:32:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.990 10:32:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.990 10:32:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.990 10:32:53 -- accel/accel.sh@42 -- # jq -r . 00:05:22.990 [2024-12-03 10:32:53.233725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.990 [2024-12-03 10:32:53.233830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58545 ] 00:05:22.990 [2024-12-03 10:32:53.382090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.990 [2024-12-03 10:32:53.531505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.250 10:32:53 -- accel/accel.sh@21 -- # val= 00:05:23.250 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.250 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.250 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.250 10:32:53 -- accel/accel.sh@21 -- # val= 00:05:23.250 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.250 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.250 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.250 10:32:53 -- accel/accel.sh@21 -- # val=0x1 00:05:23.250 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.250 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.250 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val= 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val= 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val=crc32c 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val=32 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val= 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val=software 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val=32 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val=32 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val=1 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val=Yes 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val= 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:23.251 10:32:53 -- accel/accel.sh@21 -- # val= 00:05:23.251 10:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:23.251 10:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:24.715 10:32:55 -- accel/accel.sh@21 -- # val= 00:05:24.715 10:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:24.715 10:32:55 -- accel/accel.sh@21 -- # val= 00:05:24.715 10:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:24.715 10:32:55 -- accel/accel.sh@21 -- # val= 00:05:24.715 10:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:24.715 10:32:55 -- accel/accel.sh@21 -- # val= 00:05:24.715 10:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:24.715 10:32:55 -- accel/accel.sh@21 -- # val= 00:05:24.715 10:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:24.715 10:32:55 -- 
accel/accel.sh@20 -- # read -r var val 00:05:24.715 10:32:55 -- accel/accel.sh@21 -- # val= 00:05:24.715 10:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:24.715 10:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:24.715 10:32:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:24.715 10:32:55 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:24.715 10:32:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.715 00:05:24.715 real 0m3.837s 00:05:24.715 user 0m3.402s 00:05:24.715 sys 0m0.228s 00:05:24.715 10:32:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.715 ************************************ 00:05:24.715 END TEST accel_crc32c 00:05:24.715 ************************************ 00:05:24.715 10:32:55 -- common/autotest_common.sh@10 -- # set +x 00:05:24.715 10:32:55 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:24.715 10:32:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:24.715 10:32:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.715 10:32:55 -- common/autotest_common.sh@10 -- # set +x 00:05:24.715 ************************************ 00:05:24.715 START TEST accel_crc32c_C2 00:05:24.715 ************************************ 00:05:24.715 10:32:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:24.715 10:32:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:24.715 10:32:55 -- accel/accel.sh@17 -- # local accel_module 00:05:24.715 10:32:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:24.715 10:32:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:24.715 10:32:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.715 10:32:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.715 10:32:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.715 10:32:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.715 10:32:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.715 10:32:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.715 10:32:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.715 10:32:55 -- accel/accel.sh@42 -- # jq -r . 00:05:24.715 [2024-12-03 10:32:55.236255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:24.715 [2024-12-03 10:32:55.236355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58586 ] 00:05:24.973 [2024-12-03 10:32:55.384151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.973 [2024-12-03 10:32:55.542109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.872 10:32:57 -- accel/accel.sh@18 -- # out=' 00:05:26.872 SPDK Configuration: 00:05:26.872 Core mask: 0x1 00:05:26.872 00:05:26.872 Accel Perf Configuration: 00:05:26.872 Workload Type: crc32c 00:05:26.872 CRC-32C seed: 0 00:05:26.872 Transfer size: 4096 bytes 00:05:26.872 Vector count 2 00:05:26.872 Module: software 00:05:26.872 Queue depth: 32 00:05:26.872 Allocate depth: 32 00:05:26.872 # threads/core: 1 00:05:26.872 Run time: 1 seconds 00:05:26.872 Verify: Yes 00:05:26.872 00:05:26.872 Running for 1 seconds... 
00:05:26.872 00:05:26.873 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:26.873 ------------------------------------------------------------------------------------ 00:05:26.873 0,0 486624/s 3801 MiB/s 0 0 00:05:26.873 ==================================================================================== 00:05:26.873 Total 486624/s 3801 MiB/s 0 0' 00:05:26.873 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:26.873 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:26.873 10:32:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:26.873 10:32:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:26.873 10:32:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.873 10:32:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.873 10:32:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.873 10:32:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.873 10:32:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.873 10:32:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.873 10:32:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.873 10:32:57 -- accel/accel.sh@42 -- # jq -r . 00:05:26.873 [2024-12-03 10:32:57.176914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.873 [2024-12-03 10:32:57.177192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58612 ] 00:05:26.873 [2024-12-03 10:32:57.325252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.873 [2024-12-03 10:32:57.482506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.131 10:32:57 -- accel/accel.sh@21 -- # val= 00:05:27.131 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.131 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.131 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.131 10:32:57 -- accel/accel.sh@21 -- # val= 00:05:27.131 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.131 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.131 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.131 10:32:57 -- accel/accel.sh@21 -- # val=0x1 00:05:27.131 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.131 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.131 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.131 10:32:57 -- accel/accel.sh@21 -- # val= 00:05:27.131 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.131 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val= 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val=crc32c 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val=0 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val= 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val=software 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val=32 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val=32 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val=1 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val=Yes 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val= 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:27.132 10:32:57 -- accel/accel.sh@21 -- # val= 00:05:27.132 10:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:27.132 10:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:28.518 10:32:59 -- accel/accel.sh@21 -- # val= 00:05:28.518 10:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # IFS=: 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # read -r var val 00:05:28.518 10:32:59 -- accel/accel.sh@21 -- # val= 00:05:28.518 10:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # IFS=: 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # read -r var val 00:05:28.518 10:32:59 -- accel/accel.sh@21 -- # val= 00:05:28.518 10:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # IFS=: 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # read -r var val 00:05:28.518 10:32:59 -- accel/accel.sh@21 -- # val= 00:05:28.518 10:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # IFS=: 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # read -r var val 00:05:28.518 10:32:59 -- accel/accel.sh@21 -- # val= 00:05:28.518 10:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # IFS=: 00:05:28.518 10:32:59 -- 
accel/accel.sh@20 -- # read -r var val 00:05:28.518 10:32:59 -- accel/accel.sh@21 -- # val= 00:05:28.518 10:32:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # IFS=: 00:05:28.518 10:32:59 -- accel/accel.sh@20 -- # read -r var val 00:05:28.518 10:32:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.518 10:32:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:28.518 10:32:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.518 00:05:28.518 real 0m3.905s 00:05:28.518 user 0m3.458s 00:05:28.518 sys 0m0.243s 00:05:28.518 10:32:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.518 ************************************ 00:05:28.518 END TEST accel_crc32c_C2 00:05:28.518 ************************************ 00:05:28.518 10:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.782 10:32:59 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:28.782 10:32:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:28.782 10:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.782 10:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.782 ************************************ 00:05:28.782 START TEST accel_copy 00:05:28.782 ************************************ 00:05:28.782 10:32:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:28.782 10:32:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.782 10:32:59 -- accel/accel.sh@17 -- # local accel_module 00:05:28.782 10:32:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:28.782 10:32:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:28.782 10:32:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.782 10:32:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.782 10:32:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.782 10:32:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.782 10:32:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.782 10:32:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.782 10:32:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.782 10:32:59 -- accel/accel.sh@42 -- # jq -r . 00:05:28.782 [2024-12-03 10:32:59.181044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:28.782 [2024-12-03 10:32:59.181203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58649 ] 00:05:28.782 [2024-12-03 10:32:59.327051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.040 [2024-12-03 10:32:59.478673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.939 10:33:01 -- accel/accel.sh@18 -- # out=' 00:05:30.939 SPDK Configuration: 00:05:30.939 Core mask: 0x1 00:05:30.939 00:05:30.939 Accel Perf Configuration: 00:05:30.939 Workload Type: copy 00:05:30.939 Transfer size: 4096 bytes 00:05:30.939 Vector count 1 00:05:30.939 Module: software 00:05:30.939 Queue depth: 32 00:05:30.939 Allocate depth: 32 00:05:30.939 # threads/core: 1 00:05:30.939 Run time: 1 seconds 00:05:30.939 Verify: Yes 00:05:30.939 00:05:30.939 Running for 1 seconds... 
00:05:30.939 00:05:30.939 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:30.939 ------------------------------------------------------------------------------------ 00:05:30.939 0,0 366752/s 1432 MiB/s 0 0 00:05:30.939 ==================================================================================== 00:05:30.939 Total 366752/s 1432 MiB/s 0 0' 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:30.939 10:33:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:30.939 10:33:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.939 10:33:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.939 10:33:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.939 10:33:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.939 10:33:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.939 10:33:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.939 10:33:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.939 10:33:01 -- accel/accel.sh@42 -- # jq -r . 00:05:30.939 [2024-12-03 10:33:01.109997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.939 [2024-12-03 10:33:01.110117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58674 ] 00:05:30.939 [2024-12-03 10:33:01.248273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.939 [2024-12-03 10:33:01.393346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val= 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val= 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val=0x1 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val= 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val= 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val=copy 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- 
accel/accel.sh@21 -- # val= 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val=software 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val=32 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.939 10:33:01 -- accel/accel.sh@21 -- # val=32 00:05:30.939 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.939 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.940 10:33:01 -- accel/accel.sh@21 -- # val=1 00:05:30.940 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.940 10:33:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:30.940 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.940 10:33:01 -- accel/accel.sh@21 -- # val=Yes 00:05:30.940 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.940 10:33:01 -- accel/accel.sh@21 -- # val= 00:05:30.940 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:30.940 10:33:01 -- accel/accel.sh@21 -- # val= 00:05:30.940 10:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:30.940 10:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:32.367 10:33:02 -- accel/accel.sh@21 -- # val= 00:05:32.367 10:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # IFS=: 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # read -r var val 00:05:32.367 10:33:02 -- accel/accel.sh@21 -- # val= 00:05:32.367 10:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # IFS=: 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # read -r var val 00:05:32.367 10:33:02 -- accel/accel.sh@21 -- # val= 00:05:32.367 10:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # IFS=: 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # read -r var val 00:05:32.367 10:33:02 -- accel/accel.sh@21 -- # val= 00:05:32.367 10:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # IFS=: 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # read -r var val 00:05:32.367 10:33:02 -- accel/accel.sh@21 -- # val= 00:05:32.367 10:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # IFS=: 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # read -r var val 00:05:32.367 10:33:02 -- accel/accel.sh@21 -- # val= 00:05:32.367 10:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.367 10:33:02 -- accel/accel.sh@20 -- # IFS=: 00:05:32.367 10:33:02 -- 
accel/accel.sh@20 -- # read -r var val 00:05:32.629 10:33:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:32.629 10:33:02 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:32.629 10:33:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.629 00:05:32.629 real 0m3.839s 00:05:32.629 user 0m3.400s 00:05:32.629 sys 0m0.234s 00:05:32.629 10:33:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.629 10:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:32.629 ************************************ 00:05:32.629 END TEST accel_copy 00:05:32.629 ************************************ 00:05:32.629 10:33:03 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:32.629 10:33:03 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:32.629 10:33:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.629 10:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:32.629 ************************************ 00:05:32.629 START TEST accel_fill 00:05:32.629 ************************************ 00:05:32.629 10:33:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:32.629 10:33:03 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.629 10:33:03 -- accel/accel.sh@17 -- # local accel_module 00:05:32.629 10:33:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:32.629 10:33:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:32.629 10:33:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.629 10:33:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.629 10:33:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.629 10:33:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.629 10:33:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.629 10:33:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.629 10:33:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.629 10:33:03 -- accel/accel.sh@42 -- # jq -r . 00:05:32.629 [2024-12-03 10:33:03.060396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.629 [2024-12-03 10:33:03.060494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58709 ] 00:05:32.629 [2024-12-03 10:33:03.209133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.891 [2024-12-03 10:33:03.359820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.803 10:33:04 -- accel/accel.sh@18 -- # out=' 00:05:34.803 SPDK Configuration: 00:05:34.803 Core mask: 0x1 00:05:34.803 00:05:34.803 Accel Perf Configuration: 00:05:34.803 Workload Type: fill 00:05:34.803 Fill pattern: 0x80 00:05:34.803 Transfer size: 4096 bytes 00:05:34.803 Vector count 1 00:05:34.803 Module: software 00:05:34.803 Queue depth: 64 00:05:34.803 Allocate depth: 64 00:05:34.803 # threads/core: 1 00:05:34.803 Run time: 1 seconds 00:05:34.803 Verify: Yes 00:05:34.803 00:05:34.803 Running for 1 seconds... 
00:05:34.803 00:05:34.803 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:34.803 ------------------------------------------------------------------------------------ 00:05:34.803 0,0 591488/s 2310 MiB/s 0 0 00:05:34.803 ==================================================================================== 00:05:34.803 Total 591488/s 2310 MiB/s 0 0' 00:05:34.803 10:33:04 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:04 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.803 10:33:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.803 10:33:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.803 10:33:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.803 10:33:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.803 10:33:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.803 10:33:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.803 10:33:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.803 10:33:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.803 10:33:04 -- accel/accel.sh@42 -- # jq -r . 00:05:34.803 [2024-12-03 10:33:04.976571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.803 [2024-12-03 10:33:04.976667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58735 ] 00:05:34.803 [2024-12-03 10:33:05.121757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.803 [2024-12-03 10:33:05.260920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val= 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val= 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=0x1 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val= 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val= 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=fill 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=0x80 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 
00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val= 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=software 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@23 -- # accel_module=software 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=64 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=64 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=1 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val=Yes 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val= 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:34.803 10:33:05 -- accel/accel.sh@21 -- # val= 00:05:34.803 10:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # IFS=: 00:05:34.803 10:33:05 -- accel/accel.sh@20 -- # read -r var val 00:05:36.715 10:33:06 -- accel/accel.sh@21 -- # val= 00:05:36.715 10:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # IFS=: 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # read -r var val 00:05:36.715 10:33:06 -- accel/accel.sh@21 -- # val= 00:05:36.715 10:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # IFS=: 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # read -r var val 00:05:36.715 10:33:06 -- accel/accel.sh@21 -- # val= 00:05:36.715 10:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # IFS=: 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # read -r var val 00:05:36.715 10:33:06 -- accel/accel.sh@21 -- # val= 00:05:36.715 10:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # IFS=: 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # read -r var val 00:05:36.715 10:33:06 -- accel/accel.sh@21 -- # val= 00:05:36.715 10:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # IFS=: 
00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # read -r var val 00:05:36.715 10:33:06 -- accel/accel.sh@21 -- # val= 00:05:36.715 10:33:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # IFS=: 00:05:36.715 10:33:06 -- accel/accel.sh@20 -- # read -r var val 00:05:36.715 10:33:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:36.715 10:33:06 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:36.715 10:33:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.715 00:05:36.715 real 0m3.822s 00:05:36.715 user 0m3.390s 00:05:36.715 sys 0m0.223s 00:05:36.715 10:33:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.715 10:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:36.715 ************************************ 00:05:36.715 END TEST accel_fill 00:05:36.715 ************************************ 00:05:36.715 10:33:06 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:36.715 10:33:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:36.715 10:33:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.715 10:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:36.715 ************************************ 00:05:36.715 START TEST accel_copy_crc32c 00:05:36.715 ************************************ 00:05:36.715 10:33:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:36.715 10:33:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.715 10:33:06 -- accel/accel.sh@17 -- # local accel_module 00:05:36.715 10:33:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:36.715 10:33:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:36.715 10:33:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.715 10:33:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.715 10:33:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.715 10:33:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.715 10:33:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.715 10:33:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.715 10:33:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.715 10:33:06 -- accel/accel.sh@42 -- # jq -r . 00:05:36.716 [2024-12-03 10:33:06.920448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.716 [2024-12-03 10:33:06.920553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58776 ] 00:05:36.716 [2024-12-03 10:33:07.065985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.716 [2024-12-03 10:33:07.211284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.663 10:33:08 -- accel/accel.sh@18 -- # out=' 00:05:38.663 SPDK Configuration: 00:05:38.663 Core mask: 0x1 00:05:38.663 00:05:38.663 Accel Perf Configuration: 00:05:38.663 Workload Type: copy_crc32c 00:05:38.663 CRC-32C seed: 0 00:05:38.663 Vector size: 4096 bytes 00:05:38.663 Transfer size: 4096 bytes 00:05:38.663 Vector count 1 00:05:38.663 Module: software 00:05:38.663 Queue depth: 32 00:05:38.664 Allocate depth: 32 00:05:38.664 # threads/core: 1 00:05:38.664 Run time: 1 seconds 00:05:38.664 Verify: Yes 00:05:38.664 00:05:38.664 Running for 1 seconds... 
00:05:38.664 00:05:38.664 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:38.664 ------------------------------------------------------------------------------------ 00:05:38.664 0,0 310016/s 1211 MiB/s 0 0 00:05:38.664 ==================================================================================== 00:05:38.664 Total 310016/s 1211 MiB/s 0 0' 00:05:38.664 10:33:08 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:08 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:38.664 10:33:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:38.664 10:33:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.664 10:33:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.664 10:33:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.664 10:33:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.664 10:33:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.664 10:33:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.664 10:33:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.664 10:33:08 -- accel/accel.sh@42 -- # jq -r . 00:05:38.664 [2024-12-03 10:33:08.838119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.664 [2024-12-03 10:33:08.838221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58802 ] 00:05:38.664 [2024-12-03 10:33:08.985445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.664 [2024-12-03 10:33:09.131992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val= 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val= 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=0x1 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val= 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val= 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=0 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 
10:33:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val= 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=software 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@23 -- # accel_module=software 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=32 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=32 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=1 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val=Yes 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val= 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:38.664 10:33:09 -- accel/accel.sh@21 -- # val= 00:05:38.664 10:33:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # IFS=: 00:05:38.664 10:33:09 -- accel/accel.sh@20 -- # read -r var val 00:05:40.573 10:33:10 -- accel/accel.sh@21 -- # val= 00:05:40.573 10:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # IFS=: 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # read -r var val 00:05:40.573 10:33:10 -- accel/accel.sh@21 -- # val= 00:05:40.573 10:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # IFS=: 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # read -r var val 00:05:40.573 10:33:10 -- accel/accel.sh@21 -- # val= 00:05:40.573 10:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # IFS=: 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # read -r var val 00:05:40.573 10:33:10 -- accel/accel.sh@21 -- # val= 00:05:40.573 10:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # IFS=: 
00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # read -r var val 00:05:40.573 10:33:10 -- accel/accel.sh@21 -- # val= 00:05:40.573 10:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # IFS=: 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # read -r var val 00:05:40.573 10:33:10 -- accel/accel.sh@21 -- # val= 00:05:40.573 10:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # IFS=: 00:05:40.573 10:33:10 -- accel/accel.sh@20 -- # read -r var val 00:05:40.573 10:33:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:40.573 10:33:10 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:40.573 10:33:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.573 00:05:40.573 real 0m3.835s 00:05:40.573 user 0m3.390s 00:05:40.574 sys 0m0.241s 00:05:40.574 10:33:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.574 ************************************ 00:05:40.574 END TEST accel_copy_crc32c 00:05:40.574 ************************************ 00:05:40.574 10:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:40.574 10:33:10 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:40.574 10:33:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:40.574 10:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.574 10:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:40.574 ************************************ 00:05:40.574 START TEST accel_copy_crc32c_C2 00:05:40.574 ************************************ 00:05:40.574 10:33:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:40.574 10:33:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.574 10:33:10 -- accel/accel.sh@17 -- # local accel_module 00:05:40.574 10:33:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:40.574 10:33:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:40.574 10:33:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.574 10:33:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.574 10:33:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.574 10:33:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.574 10:33:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.574 10:33:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.574 10:33:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.574 10:33:10 -- accel/accel.sh@42 -- # jq -r . 00:05:40.574 [2024-12-03 10:33:10.790379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
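(The "IFS=: / read -r var val / case "$var" in / val=..." blocks that dominate each TEST above are bash xtrace from accel.sh parsing the accel_perf report: every "Name: value" line is split at the colon and a case statement picks out the opcode and module. A sketch reconstructed from the @20-@24 line tags in the trace, not the verbatim source:

    while IFS=: read -r var val; do          # e.g. var='Module', val='software'
        case "$var" in
            *Module*) accel_module=$val ;;           # traced as accel.sh@23
            *'Workload Type'*) accel_opc=$val ;;     # traced as accel.sh@24
        esac
    done                                     # fed by the second accel_perf run (the @15 entries)

The closing [[ -n software ]] / [[ -n copy_crc32c ]] / [[ software == \s\o\f\t\w\a\r\e ]] checks then assert that both fields were parsed and that the software module actually handled the workload.)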
00:05:40.574 [2024-12-03 10:33:10.790490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58843 ] 00:05:40.574 [2024-12-03 10:33:10.937371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.574 [2024-12-03 10:33:11.079351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.491 10:33:12 -- accel/accel.sh@18 -- # out=' 00:05:42.491 SPDK Configuration: 00:05:42.491 Core mask: 0x1 00:05:42.491 00:05:42.491 Accel Perf Configuration: 00:05:42.491 Workload Type: copy_crc32c 00:05:42.491 CRC-32C seed: 0 00:05:42.491 Vector size: 4096 bytes 00:05:42.491 Transfer size: 8192 bytes 00:05:42.491 Vector count 2 00:05:42.491 Module: software 00:05:42.491 Queue depth: 32 00:05:42.491 Allocate depth: 32 00:05:42.491 # threads/core: 1 00:05:42.491 Run time: 1 seconds 00:05:42.491 Verify: Yes 00:05:42.491 00:05:42.491 Running for 1 seconds... 00:05:42.491 00:05:42.491 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.491 ------------------------------------------------------------------------------------ 00:05:42.491 0,0 231296/s 1807 MiB/s 0 0 00:05:42.491 ==================================================================================== 00:05:42.491 Total 231296/s 1807 MiB/s 0 0' 00:05:42.491 10:33:12 -- accel/accel.sh@20 -- # IFS=: 00:05:42.491 10:33:12 -- accel/accel.sh@20 -- # read -r var val 00:05:42.491 10:33:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:42.491 10:33:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:42.491 10:33:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.491 10:33:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.491 10:33:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.491 10:33:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.491 10:33:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.491 10:33:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.491 10:33:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.491 10:33:12 -- accel/accel.sh@42 -- # jq -r . 00:05:42.491 [2024-12-03 10:33:12.703770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
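(Sanity check on the copy_crc32c -C 2 table above: with a vector count of 2, each operation moves 2 x 4096 = 8192 bytes, and accel_perf reports bandwidth as transfers/s times transfer size. With a single core, the Total row must equal the lone 0,0 row:

    $ echo $((231296 * 8192 / 1048576))      # MiB/s
    1807

which agrees with the 1807 MiB/s shown in both rows.)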
00:05:42.491 [2024-12-03 10:33:12.703873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58866 ] 00:05:42.491 [2024-12-03 10:33:12.849975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.491 [2024-12-03 10:33:12.991438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val= 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val= 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=0x1 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val= 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val= 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=0 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val= 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=software 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=32 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=32 
00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=1 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val=Yes 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val= 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:42.753 10:33:13 -- accel/accel.sh@21 -- # val= 00:05:42.753 10:33:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # IFS=: 00:05:42.753 10:33:13 -- accel/accel.sh@20 -- # read -r var val 00:05:44.140 10:33:14 -- accel/accel.sh@21 -- # val= 00:05:44.140 10:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.140 10:33:14 -- accel/accel.sh@21 -- # val= 00:05:44.140 10:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.140 10:33:14 -- accel/accel.sh@21 -- # val= 00:05:44.140 10:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.140 10:33:14 -- accel/accel.sh@21 -- # val= 00:05:44.140 10:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.140 10:33:14 -- accel/accel.sh@21 -- # val= 00:05:44.140 10:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.140 10:33:14 -- accel/accel.sh@21 -- # val= 00:05:44.140 10:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # IFS=: 00:05:44.140 10:33:14 -- accel/accel.sh@20 -- # read -r var val 00:05:44.140 10:33:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.140 10:33:14 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:44.140 10:33:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.140 00:05:44.140 real 0m3.833s 00:05:44.140 user 0m3.391s 00:05:44.140 sys 0m0.238s 00:05:44.140 10:33:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.140 10:33:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.140 ************************************ 00:05:44.140 END TEST accel_copy_crc32c_C2 00:05:44.140 ************************************ 00:05:44.140 10:33:14 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:44.140 10:33:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:05:44.140 10:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.140 10:33:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.140 ************************************ 00:05:44.140 START TEST accel_dualcast 00:05:44.140 ************************************ 00:05:44.140 10:33:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:05:44.140 10:33:14 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.140 10:33:14 -- accel/accel.sh@17 -- # local accel_module 00:05:44.140 10:33:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:44.140 10:33:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:44.140 10:33:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.140 10:33:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.140 10:33:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.140 10:33:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.140 10:33:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.140 10:33:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.140 10:33:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.140 10:33:14 -- accel/accel.sh@42 -- # jq -r . 00:05:44.140 [2024-12-03 10:33:14.664248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.140 [2024-12-03 10:33:14.664352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:05:44.401 [2024-12-03 10:33:14.809680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.401 [2024-12-03 10:33:14.958400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.315 10:33:16 -- accel/accel.sh@18 -- # out=' 00:05:46.315 SPDK Configuration: 00:05:46.315 Core mask: 0x1 00:05:46.315 00:05:46.315 Accel Perf Configuration: 00:05:46.315 Workload Type: dualcast 00:05:46.316 Transfer size: 4096 bytes 00:05:46.316 Vector count 1 00:05:46.316 Module: software 00:05:46.316 Queue depth: 32 00:05:46.316 Allocate depth: 32 00:05:46.316 # threads/core: 1 00:05:46.316 Run time: 1 seconds 00:05:46.316 Verify: Yes 00:05:46.316 00:05:46.316 Running for 1 seconds... 00:05:46.316 00:05:46.316 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.316 ------------------------------------------------------------------------------------ 00:05:46.316 0,0 430688/s 1682 MiB/s 0 0 00:05:46.316 ==================================================================================== 00:05:46.316 Total 430688/s 1682 MiB/s 0 0' 00:05:46.316 10:33:16 -- accel/accel.sh@20 -- # IFS=: 00:05:46.316 10:33:16 -- accel/accel.sh@20 -- # read -r var val 00:05:46.316 10:33:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:46.316 10:33:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:46.316 10:33:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.316 10:33:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.316 10:33:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.316 10:33:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.316 10:33:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.316 10:33:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.316 10:33:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.316 10:33:16 -- accel/accel.sh@42 -- # jq -r . 
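(build_accel_config, traced at accel.sh@32-@42 before every accel_perf launch, assembles the JSON handed to -c /dev/fd/62: it starts an empty accel_json_cfg array, appends a module entry for each hardware-accel flag that is set (all of the [[ 0 -gt 0 ]] guards fail in these runs), joins the entries with IFS=, and validates the result with jq -r . A rough sketch of that shape; the flag and RPC names below are placeholders, not necessarily what the script uses:

    build_accel_config() {
        accel_json_cfg=()
        # append one snippet per enabled hardware module (none here)
        [[ $SPDK_TEST_ACCEL_DSA -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }

With the array empty, the software module serves every workload, hence "Module: software" in every report above.)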
00:05:46.316 [2024-12-03 10:33:16.587099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.316 [2024-12-03 10:33:16.587205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ] 00:05:46.316 [2024-12-03 10:33:16.732257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.316 [2024-12-03 10:33:16.882588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.576 10:33:16 -- accel/accel.sh@21 -- # val= 00:05:46.576 10:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # IFS=: 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # read -r var val 00:05:46.576 10:33:16 -- accel/accel.sh@21 -- # val= 00:05:46.576 10:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # IFS=: 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # read -r var val 00:05:46.576 10:33:16 -- accel/accel.sh@21 -- # val=0x1 00:05:46.576 10:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # IFS=: 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # read -r var val 00:05:46.576 10:33:16 -- accel/accel.sh@21 -- # val= 00:05:46.576 10:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # IFS=: 00:05:46.576 10:33:16 -- accel/accel.sh@20 -- # read -r var val 00:05:46.576 10:33:17 -- accel/accel.sh@21 -- # val= 00:05:46.576 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.576 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.576 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.576 10:33:17 -- accel/accel.sh@21 -- # val=dualcast 00:05:46.576 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.576 10:33:17 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:46.576 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.576 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.576 10:33:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val= 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val=software 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val=32 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val=32 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val=1 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 
10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val=Yes 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val= 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:46.577 10:33:17 -- accel/accel.sh@21 -- # val= 00:05:46.577 10:33:17 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # IFS=: 00:05:46.577 10:33:17 -- accel/accel.sh@20 -- # read -r var val 00:05:47.962 10:33:18 -- accel/accel.sh@21 -- # val= 00:05:47.962 10:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.962 10:33:18 -- accel/accel.sh@20 -- # IFS=: 00:05:47.962 10:33:18 -- accel/accel.sh@20 -- # read -r var val 00:05:47.962 10:33:18 -- accel/accel.sh@21 -- # val= 00:05:47.962 10:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.962 10:33:18 -- accel/accel.sh@20 -- # IFS=: 00:05:47.962 10:33:18 -- accel/accel.sh@20 -- # read -r var val 00:05:47.962 10:33:18 -- accel/accel.sh@21 -- # val= 00:05:47.962 10:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # IFS=: 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # read -r var val 00:05:47.963 10:33:18 -- accel/accel.sh@21 -- # val= 00:05:47.963 10:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # IFS=: 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # read -r var val 00:05:47.963 10:33:18 -- accel/accel.sh@21 -- # val= 00:05:47.963 10:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # IFS=: 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # read -r var val 00:05:47.963 10:33:18 -- accel/accel.sh@21 -- # val= 00:05:47.963 10:33:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # IFS=: 00:05:47.963 10:33:18 -- accel/accel.sh@20 -- # read -r var val 00:05:47.963 10:33:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:47.963 10:33:18 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:47.963 10:33:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.963 00:05:47.963 real 0m3.857s 00:05:47.963 user 0m3.430s 00:05:47.963 sys 0m0.222s 00:05:47.963 10:33:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.963 10:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:47.963 ************************************ 00:05:47.963 END TEST accel_dualcast 00:05:47.963 ************************************ 00:05:47.963 10:33:18 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:47.963 10:33:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:47.963 10:33:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.963 10:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:47.963 ************************************ 00:05:47.963 START TEST accel_compare 00:05:47.963 ************************************ 00:05:47.963 10:33:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:05:47.963 
10:33:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.963 10:33:18 -- accel/accel.sh@17 -- # local accel_module 00:05:47.963 10:33:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:47.963 10:33:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:47.963 10:33:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.963 10:33:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.963 10:33:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.963 10:33:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.963 10:33:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.963 10:33:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.963 10:33:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.963 10:33:18 -- accel/accel.sh@42 -- # jq -r . 00:05:47.963 [2024-12-03 10:33:18.561019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.963 [2024-12-03 10:33:18.561128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:05:48.223 [2024-12-03 10:33:18.712767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.484 [2024-12-03 10:33:18.916557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.399 10:33:20 -- accel/accel.sh@18 -- # out=' 00:05:50.399 SPDK Configuration: 00:05:50.399 Core mask: 0x1 00:05:50.399 00:05:50.399 Accel Perf Configuration: 00:05:50.399 Workload Type: compare 00:05:50.399 Transfer size: 4096 bytes 00:05:50.399 Vector count 1 00:05:50.399 Module: software 00:05:50.399 Queue depth: 32 00:05:50.399 Allocate depth: 32 00:05:50.399 # threads/core: 1 00:05:50.399 Run time: 1 seconds 00:05:50.399 Verify: Yes 00:05:50.399 00:05:50.399 Running for 1 seconds... 00:05:50.399 00:05:50.399 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.399 ------------------------------------------------------------------------------------ 00:05:50.399 0,0 425152/s 1660 MiB/s 0 0 00:05:50.399 ==================================================================================== 00:05:50.399 Total 425152/s 1660 MiB/s 0 0' 00:05:50.399 10:33:20 -- accel/accel.sh@20 -- # IFS=: 00:05:50.399 10:33:20 -- accel/accel.sh@20 -- # read -r var val 00:05:50.399 10:33:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:50.399 10:33:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:50.399 10:33:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.399 10:33:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.399 10:33:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.399 10:33:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.399 10:33:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.399 10:33:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.399 10:33:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.399 10:33:20 -- accel/accel.sh@42 -- # jq -r . 00:05:50.399 [2024-12-03 10:33:20.756129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
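(Every test here drives the same binary; the full command line is recorded in the trace, e.g. for this case:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y

-t is the run time in seconds, -w the workload, and -y enables verification; -q/-a set queue and allocate depth (defaulting to 32 when omitted, per the reports above), -f the fill pattern, -C the vector count, and -x the xor source count. Replaying one workload by hand from this build tree should only need those flags; the -c /dev/fd/62 JSON config is supplied by the harness and, for these software-only runs, is presumably optional.)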
00:05:50.399 [2024-12-03 10:33:20.756231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59000 ] 00:05:50.399 [2024-12-03 10:33:20.901671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.720 [2024-12-03 10:33:21.104272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val= 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val= 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val=0x1 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val= 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val= 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val=compare 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val= 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val=software 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@23 -- # accel_module=software 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val=32 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val=32 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val=1 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val=Yes 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val= 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:50.720 10:33:21 -- accel/accel.sh@21 -- # val= 00:05:50.720 10:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # IFS=: 00:05:50.720 10:33:21 -- accel/accel.sh@20 -- # read -r var val 00:05:52.643 10:33:22 -- accel/accel.sh@21 -- # val= 00:05:52.643 10:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # IFS=: 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # read -r var val 00:05:52.643 10:33:22 -- accel/accel.sh@21 -- # val= 00:05:52.643 10:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # IFS=: 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # read -r var val 00:05:52.643 10:33:22 -- accel/accel.sh@21 -- # val= 00:05:52.643 10:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # IFS=: 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # read -r var val 00:05:52.643 10:33:22 -- accel/accel.sh@21 -- # val= 00:05:52.643 10:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # IFS=: 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # read -r var val 00:05:52.643 10:33:22 -- accel/accel.sh@21 -- # val= 00:05:52.643 10:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # IFS=: 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # read -r var val 00:05:52.643 10:33:22 -- accel/accel.sh@21 -- # val= 00:05:52.643 10:33:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # IFS=: 00:05:52.643 10:33:22 -- accel/accel.sh@20 -- # read -r var val 00:05:52.643 10:33:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.643 10:33:22 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:52.643 10:33:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.643 00:05:52.643 real 0m4.370s 00:05:52.643 user 0m3.885s 00:05:52.643 sys 0m0.274s 00:05:52.643 10:33:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.643 10:33:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.643 ************************************ 00:05:52.643 END TEST accel_compare 00:05:52.643 ************************************ 00:05:52.643 10:33:22 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:52.643 10:33:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:52.643 10:33:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.643 10:33:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.643 ************************************ 00:05:52.643 START TEST accel_xor 00:05:52.643 ************************************ 00:05:52.643 10:33:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:05:52.643 10:33:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.643 10:33:22 -- accel/accel.sh@17 -- # local accel_module 00:05:52.643 
10:33:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:52.643 10:33:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:52.643 10:33:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.643 10:33:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.643 10:33:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.643 10:33:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.643 10:33:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.643 10:33:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.643 10:33:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.643 10:33:22 -- accel/accel.sh@42 -- # jq -r . 00:05:52.643 [2024-12-03 10:33:22.969933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.643 [2024-12-03 10:33:22.970065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59042 ] 00:05:52.643 [2024-12-03 10:33:23.119783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.901 [2024-12-03 10:33:23.322536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.796 10:33:25 -- accel/accel.sh@18 -- # out=' 00:05:54.796 SPDK Configuration: 00:05:54.796 Core mask: 0x1 00:05:54.796 00:05:54.796 Accel Perf Configuration: 00:05:54.796 Workload Type: xor 00:05:54.796 Source buffers: 2 00:05:54.796 Transfer size: 4096 bytes 00:05:54.796 Vector count 1 00:05:54.796 Module: software 00:05:54.796 Queue depth: 32 00:05:54.796 Allocate depth: 32 00:05:54.796 # threads/core: 1 00:05:54.796 Run time: 1 seconds 00:05:54.796 Verify: Yes 00:05:54.796 00:05:54.796 Running for 1 seconds... 00:05:54.796 00:05:54.796 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.796 ------------------------------------------------------------------------------------ 00:05:54.796 0,0 332736/s 1299 MiB/s 0 0 00:05:54.796 ==================================================================================== 00:05:54.796 Total 332736/s 1299 MiB/s 0 0' 00:05:54.796 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:54.796 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:54.796 10:33:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:54.797 10:33:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:54.797 10:33:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.797 10:33:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.797 10:33:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.797 10:33:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.797 10:33:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.797 10:33:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.797 10:33:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.797 10:33:25 -- accel/accel.sh@42 -- # jq -r . 00:05:54.797 [2024-12-03 10:33:25.159800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
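(For the xor workload above, each operation combines two 4096-byte source buffers into one destination, and the reported figure is again consistent with transfers/s times 4096 bytes:

    $ echo $((332736 * 4096 / 1048576))      # MiB/s, integer-truncated
    1299

matching the 1299 MiB/s in the table. The accel_xor -x 3 case further down raises the source-buffer count to 3 without changing the per-transfer byte count.)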
00:05:54.797 [2024-12-03 10:33:25.159914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:05:54.797 [2024-12-03 10:33:25.306444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.054 [2024-12-03 10:33:25.513900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val= 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val= 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=0x1 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val= 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val= 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=xor 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=2 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val= 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=software 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@23 -- # accel_module=software 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=32 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=32 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=1 00:05:55.312 10:33:25 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val=Yes 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val= 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:55.312 10:33:25 -- accel/accel.sh@21 -- # val= 00:05:55.312 10:33:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # IFS=: 00:05:55.312 10:33:25 -- accel/accel.sh@20 -- # read -r var val 00:05:56.685 10:33:27 -- accel/accel.sh@21 -- # val= 00:05:56.943 10:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # IFS=: 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # read -r var val 00:05:56.943 10:33:27 -- accel/accel.sh@21 -- # val= 00:05:56.943 10:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # IFS=: 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # read -r var val 00:05:56.943 10:33:27 -- accel/accel.sh@21 -- # val= 00:05:56.943 10:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # IFS=: 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # read -r var val 00:05:56.943 10:33:27 -- accel/accel.sh@21 -- # val= 00:05:56.943 10:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # IFS=: 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # read -r var val 00:05:56.943 10:33:27 -- accel/accel.sh@21 -- # val= 00:05:56.943 10:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # IFS=: 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # read -r var val 00:05:56.943 10:33:27 -- accel/accel.sh@21 -- # val= 00:05:56.943 10:33:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # IFS=: 00:05:56.943 10:33:27 -- accel/accel.sh@20 -- # read -r var val 00:05:56.943 10:33:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.943 10:33:27 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:56.943 10:33:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.943 00:05:56.943 real 0m4.370s 00:05:56.943 user 0m3.857s 00:05:56.943 sys 0m0.301s 00:05:56.943 10:33:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.943 10:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.943 ************************************ 00:05:56.943 END TEST accel_xor 00:05:56.943 ************************************ 00:05:56.943 10:33:27 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:56.943 10:33:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:56.943 10:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.943 10:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.943 ************************************ 00:05:56.943 START TEST accel_xor 00:05:56.943 ************************************ 00:05:56.943 
10:33:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:05:56.943 10:33:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.943 10:33:27 -- accel/accel.sh@17 -- # local accel_module 00:05:56.943 10:33:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:56.943 10:33:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:56.943 10:33:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.943 10:33:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.943 10:33:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.943 10:33:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.943 10:33:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.943 10:33:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.943 10:33:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.943 10:33:27 -- accel/accel.sh@42 -- # jq -r . 00:05:56.943 [2024-12-03 10:33:27.378745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.943 [2024-12-03 10:33:27.378843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:05:56.943 [2024-12-03 10:33:27.527876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.200 [2024-12-03 10:33:27.740507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.099 10:33:29 -- accel/accel.sh@18 -- # out=' 00:05:59.099 SPDK Configuration: 00:05:59.099 Core mask: 0x1 00:05:59.099 00:05:59.099 Accel Perf Configuration: 00:05:59.099 Workload Type: xor 00:05:59.099 Source buffers: 3 00:05:59.099 Transfer size: 4096 bytes 00:05:59.099 Vector count 1 00:05:59.099 Module: software 00:05:59.099 Queue depth: 32 00:05:59.099 Allocate depth: 32 00:05:59.099 # threads/core: 1 00:05:59.099 Run time: 1 seconds 00:05:59.099 Verify: Yes 00:05:59.099 00:05:59.099 Running for 1 seconds... 00:05:59.099 00:05:59.099 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.099 ------------------------------------------------------------------------------------ 00:05:59.099 0,0 321184/s 1254 MiB/s 0 0 00:05:59.099 ==================================================================================== 00:05:59.099 Total 321184/s 1254 MiB/s 0 0' 00:05:59.099 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.099 10:33:29 -- accel/accel.sh@20 -- # read -r var val 00:05:59.099 10:33:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:59.099 10:33:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.099 10:33:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:59.099 10:33:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.099 10:33:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.099 10:33:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.099 10:33:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.099 10:33:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.099 10:33:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.099 10:33:29 -- accel/accel.sh@42 -- # jq -r . 00:05:59.099 [2024-12-03 10:33:29.540356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
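
The bandwidth columns can be cross-checked from the transfer counts, assuming bandwidth = transfers/s x 4096-byte transfer size:

  # Derived cross-check, not captured output:
  # 332736 * 4096 / 2^20 = 1299 MiB/s   (xor, 2 source buffers)
  # 321184 * 4096 / 2^20 = 1254 MiB/s   (xor, 3 source buffers, above)

Both match the reported rows; the third source buffer costs this software path roughly 3.5% of its throughput.
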
00:05:59.099 [2024-12-03 10:33:29.540458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:05:59.099 [2024-12-03 10:33:29.685360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.357 [2024-12-03 10:33:29.868337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.616 10:33:29 -- accel/accel.sh@21 -- # val= 00:05:59.616 10:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:29 -- accel/accel.sh@21 -- # val= 00:05:59.616 10:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:29 -- accel/accel.sh@21 -- # val=0x1 00:05:59.616 10:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:29 -- accel/accel.sh@21 -- # val= 00:05:59.616 10:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:29 -- accel/accel.sh@21 -- # val= 00:05:59.616 10:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:29 -- accel/accel.sh@21 -- # val=xor 00:05:59.616 10:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:29 -- accel/accel.sh@21 -- # val=3 00:05:59.616 10:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:29 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val= 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val=software 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val=32 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val=32 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val=1 00:05:59.616 10:33:30 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val=Yes 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val= 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:05:59.616 10:33:30 -- accel/accel.sh@21 -- # val= 00:05:59.616 10:33:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # IFS=: 00:05:59.616 10:33:30 -- accel/accel.sh@20 -- # read -r var val 00:06:00.991 10:33:31 -- accel/accel.sh@21 -- # val= 00:06:00.991 10:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:00.991 10:33:31 -- accel/accel.sh@21 -- # val= 00:06:00.991 10:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:00.991 10:33:31 -- accel/accel.sh@21 -- # val= 00:06:00.991 10:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:00.991 10:33:31 -- accel/accel.sh@21 -- # val= 00:06:00.991 10:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:00.991 10:33:31 -- accel/accel.sh@21 -- # val= 00:06:00.991 10:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:00.991 10:33:31 -- accel/accel.sh@21 -- # val= 00:06:00.991 10:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # IFS=: 00:06:00.991 10:33:31 -- accel/accel.sh@20 -- # read -r var val 00:06:00.991 10:33:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.991 10:33:31 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:00.991 10:33:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.991 00:06:00.991 real 0m4.176s 00:06:00.991 user 0m3.683s 00:06:00.991 sys 0m0.283s 00:06:00.991 10:33:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.991 10:33:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.991 ************************************ 00:06:00.991 END TEST accel_xor 00:06:00.991 ************************************ 00:06:00.991 10:33:31 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:00.991 10:33:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:00.991 10:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.991 10:33:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.991 ************************************ 00:06:00.991 START TEST accel_dif_verify 00:06:00.991 ************************************ 
00:06:00.991 10:33:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:00.991 10:33:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.991 10:33:31 -- accel/accel.sh@17 -- # local accel_module 00:06:00.991 10:33:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:00.991 10:33:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:00.991 10:33:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.991 10:33:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.991 10:33:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.991 10:33:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.991 10:33:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.991 10:33:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.991 10:33:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.991 10:33:31 -- accel/accel.sh@42 -- # jq -r . 00:06:00.991 [2024-12-03 10:33:31.589534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.991 [2024-12-03 10:33:31.589676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:06:01.249 [2024-12-03 10:33:31.735341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.507 [2024-12-03 10:33:31.918449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.972 10:33:33 -- accel/accel.sh@18 -- # out=' 00:06:02.972 SPDK Configuration: 00:06:02.972 Core mask: 0x1 00:06:02.972 00:06:02.972 Accel Perf Configuration: 00:06:02.972 Workload Type: dif_verify 00:06:02.972 Vector size: 4096 bytes 00:06:02.972 Transfer size: 4096 bytes 00:06:02.972 Block size: 512 bytes 00:06:02.972 Metadata size: 8 bytes 00:06:02.972 Vector count 1 00:06:02.972 Module: software 00:06:02.972 Queue depth: 32 00:06:02.972 Allocate depth: 32 00:06:02.972 # threads/core: 1 00:06:02.972 Run time: 1 seconds 00:06:02.972 Verify: No 00:06:02.972 00:06:02.972 Running for 1 seconds... 00:06:02.972 00:06:02.972 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.972 ------------------------------------------------------------------------------------ 00:06:02.972 0,0 124960/s 495 MiB/s 0 0 00:06:02.972 ==================================================================================== 00:06:02.972 Total 124960/s 488 MiB/s 0 0' 00:06:02.972 10:33:33 -- accel/accel.sh@20 -- # IFS=: 00:06:02.972 10:33:33 -- accel/accel.sh@20 -- # read -r var val 00:06:02.972 10:33:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:02.972 10:33:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:02.972 10:33:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.972 10:33:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.972 10:33:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.972 10:33:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.972 10:33:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.972 10:33:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.972 10:33:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.972 10:33:33 -- accel/accel.sh@42 -- # jq -r . 00:06:03.230 [2024-12-03 10:33:33.599827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
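
The dif_verify configuration above ("Block size: 512 bytes", "Metadata size: 8 bytes") means each 4096-byte transfer is handled as eight 512-byte blocks, each carrying an 8-byte DIF protection-information field that the software module checks. The per-transfer accounting, derived from those values and the Total row:

  # Derived, not captured output:
  #   4096 B / 512 B            = 8 blocks per transfer
  #   8 blocks * 8 B metadata   = 64 B of protection info checked per transfer
  #   124960 * 4096 / 2^20      = 488 MiB/s, matching the Total row
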
00:06:03.230 [2024-12-03 10:33:33.599929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59209 ] 00:06:03.230 [2024-12-03 10:33:33.746525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.487 [2024-12-03 10:33:33.928916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val= 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val= 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val=0x1 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val= 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val= 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val=dif_verify 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val= 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val=software 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 
-- # val=32 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val=32 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val=1 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val=No 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val= 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:03.488 10:33:34 -- accel/accel.sh@21 -- # val= 00:06:03.488 10:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # IFS=: 00:06:03.488 10:33:34 -- accel/accel.sh@20 -- # read -r var val 00:06:05.384 10:33:35 -- accel/accel.sh@21 -- # val= 00:06:05.384 10:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:05.384 10:33:35 -- accel/accel.sh@21 -- # val= 00:06:05.384 10:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:05.384 10:33:35 -- accel/accel.sh@21 -- # val= 00:06:05.384 10:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:05.384 10:33:35 -- accel/accel.sh@21 -- # val= 00:06:05.384 10:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:05.384 10:33:35 -- accel/accel.sh@21 -- # val= 00:06:05.384 10:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:05.384 10:33:35 -- accel/accel.sh@21 -- # val= 00:06:05.384 10:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # IFS=: 00:06:05.384 10:33:35 -- accel/accel.sh@20 -- # read -r var val 00:06:05.384 10:33:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.384 10:33:35 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:05.384 10:33:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.384 00:06:05.384 real 0m4.014s 00:06:05.384 user 0m3.551s 00:06:05.384 sys 0m0.256s 00:06:05.384 10:33:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.384 ************************************ 00:06:05.384 END TEST accel_dif_verify 00:06:05.384 ************************************ 00:06:05.384 
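
Every TEST block in this section follows the same harness pattern: run_test prints the START banner, executes accel_test under shell xtrace, reports real/user/sys for the whole block, then prints the END banner. The long runs of case/IFS=:/read -r var val lines are accel.sh parsing accel_perf's output one "key: value" line at a time, and the wall time covers both 1-second measured passes plus SPDK start-up and teardown, which is why real sits around 4 seconds for 2 seconds of measured work. The invocation shape, exactly as traced above:

  # Pattern from the run_test lines in this log:
  run_test accel_dif_verify accel_test -t 1 -w dif_verify
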
10:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.384 10:33:35 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:05.384 10:33:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:05.384 10:33:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.384 10:33:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.384 ************************************ 00:06:05.384 START TEST accel_dif_generate 00:06:05.384 ************************************ 00:06:05.384 10:33:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:05.384 10:33:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.384 10:33:35 -- accel/accel.sh@17 -- # local accel_module 00:06:05.384 10:33:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:05.384 10:33:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:05.384 10:33:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.384 10:33:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.384 10:33:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.384 10:33:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.384 10:33:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.384 10:33:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.384 10:33:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.384 10:33:35 -- accel/accel.sh@42 -- # jq -r . 00:06:05.384 [2024-12-03 10:33:35.646490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.384 [2024-12-03 10:33:35.646597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59255 ] 00:06:05.384 [2024-12-03 10:33:35.791971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.384 [2024-12-03 10:33:35.976236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.284 10:33:37 -- accel/accel.sh@18 -- # out=' 00:06:07.284 SPDK Configuration: 00:06:07.284 Core mask: 0x1 00:06:07.284 00:06:07.284 Accel Perf Configuration: 00:06:07.284 Workload Type: dif_generate 00:06:07.284 Vector size: 4096 bytes 00:06:07.284 Transfer size: 4096 bytes 00:06:07.284 Block size: 512 bytes 00:06:07.284 Metadata size: 8 bytes 00:06:07.284 Vector count 1 00:06:07.284 Module: software 00:06:07.284 Queue depth: 32 00:06:07.284 Allocate depth: 32 00:06:07.284 # threads/core: 1 00:06:07.284 Run time: 1 seconds 00:06:07.284 Verify: No 00:06:07.284 00:06:07.284 Running for 1 seconds... 
00:06:07.285 00:06:07.285 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.285 ------------------------------------------------------------------------------------ 00:06:07.285 0,0 153344/s 608 MiB/s 0 0 00:06:07.285 ==================================================================================== 00:06:07.285 Total 153344/s 599 MiB/s 0 0' 00:06:07.285 10:33:37 -- accel/accel.sh@20 -- # IFS=: 00:06:07.285 10:33:37 -- accel/accel.sh@20 -- # read -r var val 00:06:07.285 10:33:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:07.285 10:33:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:07.285 10:33:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.285 10:33:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.285 10:33:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.285 10:33:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.285 10:33:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.285 10:33:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.285 10:33:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.285 10:33:37 -- accel/accel.sh@42 -- # jq -r . 00:06:07.285 [2024-12-03 10:33:37.643123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.285 [2024-12-03 10:33:37.643236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:06:07.285 [2024-12-03 10:33:37.790830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.543 [2024-12-03 10:33:37.964098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val= 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val= 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val=0x1 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val= 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val= 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val=dif_generate 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 
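
The same transfers-to-bandwidth arithmetic applies here:

  # Derived cross-check, not captured output:
  # 153344 * 4096 / 2^20 = 599 MiB/s

This agrees with the Total row; the per-core row's 608 MiB/s is the one figure in this table that does not follow from the transfer count.
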
00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val= 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val=software 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val=32 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val=32 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val=1 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val=No 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val= 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:07.543 10:33:38 -- accel/accel.sh@21 -- # val= 00:06:07.543 10:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # IFS=: 00:06:07.543 10:33:38 -- accel/accel.sh@20 -- # read -r var val 00:06:09.443 10:33:39 -- accel/accel.sh@21 -- # val= 00:06:09.443 10:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.443 10:33:39 -- accel/accel.sh@21 -- # val= 00:06:09.443 10:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.443 10:33:39 -- accel/accel.sh@21 -- # val= 00:06:09.443 10:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.443 10:33:39 -- 
accel/accel.sh@20 -- # IFS=: 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.443 10:33:39 -- accel/accel.sh@21 -- # val= 00:06:09.443 10:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.443 10:33:39 -- accel/accel.sh@21 -- # val= 00:06:09.443 10:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.443 10:33:39 -- accel/accel.sh@21 -- # val= 00:06:09.443 10:33:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.443 10:33:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.443 10:33:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.443 10:33:39 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:09.443 10:33:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.443 00:06:09.443 real 0m3.989s 00:06:09.443 user 0m3.509s 00:06:09.443 sys 0m0.270s 00:06:09.443 10:33:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.443 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.443 ************************************ 00:06:09.443 END TEST accel_dif_generate 00:06:09.443 ************************************ 00:06:09.443 10:33:39 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:09.443 10:33:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:09.443 10:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.443 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.443 ************************************ 00:06:09.443 START TEST accel_dif_generate_copy 00:06:09.443 ************************************ 00:06:09.443 10:33:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:09.443 10:33:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.443 10:33:39 -- accel/accel.sh@17 -- # local accel_module 00:06:09.443 10:33:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:09.443 10:33:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:09.443 10:33:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.443 10:33:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.443 10:33:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.443 10:33:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.443 10:33:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.443 10:33:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.443 10:33:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.443 10:33:39 -- accel/accel.sh@42 -- # jq -r . 00:06:09.443 [2024-12-03 10:33:39.677585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:09.443 [2024-12-03 10:33:39.677683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59312 ] 00:06:09.443 [2024-12-03 10:33:39.818182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.443 [2024-12-03 10:33:39.994730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.344 10:33:41 -- accel/accel.sh@18 -- # out=' 00:06:11.344 SPDK Configuration: 00:06:11.344 Core mask: 0x1 00:06:11.344 00:06:11.344 Accel Perf Configuration: 00:06:11.344 Workload Type: dif_generate_copy 00:06:11.344 Vector size: 4096 bytes 00:06:11.344 Transfer size: 4096 bytes 00:06:11.344 Vector count 1 00:06:11.344 Module: software 00:06:11.344 Queue depth: 32 00:06:11.344 Allocate depth: 32 00:06:11.344 # threads/core: 1 00:06:11.344 Run time: 1 seconds 00:06:11.344 Verify: No 00:06:11.344 00:06:11.344 Running for 1 seconds... 00:06:11.344 00:06:11.344 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.344 ------------------------------------------------------------------------------------ 00:06:11.344 0,0 117440/s 465 MiB/s 0 0 00:06:11.344 ==================================================================================== 00:06:11.344 Total 117440/s 458 MiB/s 0 0' 00:06:11.344 10:33:41 -- accel/accel.sh@20 -- # IFS=: 00:06:11.344 10:33:41 -- accel/accel.sh@20 -- # read -r var val 00:06:11.344 10:33:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:11.344 10:33:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:11.344 10:33:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.344 10:33:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.344 10:33:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.344 10:33:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.344 10:33:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.344 10:33:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.344 10:33:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.344 10:33:41 -- accel/accel.sh@42 -- # jq -r . 00:06:11.344 [2024-12-03 10:33:41.668041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
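
Each second pass is the "-c /dev/fd/62" variant: build_accel_config collects optional module options into the accel_json_cfg array (empty in these runs, which is apparently why every "[[ 0 -gt 0 ]]" guard above evaluates false) and jq -r . streams the JSON to accel_perf over an anonymous descriptor. A minimal hand-run equivalent using process substitution; the config payload here is a hypothetical placeholder, not taken from the log:

  # Sketch, not captured output; accel_json_cfg is empty in the traced runs.
  cfg_json='{}'   # hypothetical config payload
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(printf '%s\n' "$cfg_json") -t 1 -w dif_generate_copy
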
00:06:11.344 [2024-12-03 10:33:41.668133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:06:11.344 [2024-12-03 10:33:41.809798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.603 [2024-12-03 10:33:41.999051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val= 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val= 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val=0x1 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val= 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val= 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val= 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val=software 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val=32 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val=32 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 
-- # val=1 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val=No 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val= 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.603 10:33:42 -- accel/accel.sh@21 -- # val= 00:06:11.603 10:33:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.603 10:33:42 -- accel/accel.sh@20 -- # read -r var val 00:06:13.503 10:33:43 -- accel/accel.sh@21 -- # val= 00:06:13.503 10:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.503 10:33:43 -- accel/accel.sh@21 -- # val= 00:06:13.503 10:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.503 10:33:43 -- accel/accel.sh@21 -- # val= 00:06:13.503 10:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.503 10:33:43 -- accel/accel.sh@21 -- # val= 00:06:13.503 10:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.503 10:33:43 -- accel/accel.sh@21 -- # val= 00:06:13.503 10:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.503 10:33:43 -- accel/accel.sh@21 -- # val= 00:06:13.503 10:33:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.503 10:33:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.503 10:33:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.503 10:33:43 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:13.503 ************************************ 00:06:13.503 END TEST accel_dif_generate_copy 00:06:13.504 ************************************ 00:06:13.504 10:33:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.504 00:06:13.504 real 0m3.997s 00:06:13.504 user 0m3.533s 00:06:13.504 sys 0m0.257s 00:06:13.504 10:33:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.504 10:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.504 10:33:43 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:13.504 10:33:43 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.504 10:33:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:13.504 10:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.504 10:33:43 -- 
common/autotest_common.sh@10 -- # set +x 00:06:13.504 ************************************ 00:06:13.504 START TEST accel_comp 00:06:13.504 ************************************ 00:06:13.504 10:33:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.504 10:33:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.504 10:33:43 -- accel/accel.sh@17 -- # local accel_module 00:06:13.504 10:33:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.504 10:33:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.504 10:33:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.504 10:33:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.504 10:33:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.504 10:33:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.504 10:33:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.504 10:33:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.504 10:33:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.504 10:33:43 -- accel/accel.sh@42 -- # jq -r . 00:06:13.504 [2024-12-03 10:33:43.715873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.504 [2024-12-03 10:33:43.715954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59379 ] 00:06:13.504 [2024-12-03 10:33:43.857986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.504 [2024-12-03 10:33:44.034387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.399 10:33:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:15.400 00:06:15.400 SPDK Configuration: 00:06:15.400 Core mask: 0x1 00:06:15.400 00:06:15.400 Accel Perf Configuration: 00:06:15.400 Workload Type: compress 00:06:15.400 Transfer size: 4096 bytes 00:06:15.400 Vector count 1 00:06:15.400 Module: software 00:06:15.400 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.400 Queue depth: 32 00:06:15.400 Allocate depth: 32 00:06:15.400 # threads/core: 1 00:06:15.400 Run time: 1 seconds 00:06:15.400 Verify: No 00:06:15.400 00:06:15.400 Running for 1 seconds... 
00:06:15.400 00:06:15.400 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.400 ------------------------------------------------------------------------------------ 00:06:15.400 0,0 61280/s 255 MiB/s 0 0 00:06:15.400 ==================================================================================== 00:06:15.400 Total 61280/s 239 MiB/s 0 0' 00:06:15.400 10:33:45 -- accel/accel.sh@20 -- # IFS=: 00:06:15.400 10:33:45 -- accel/accel.sh@20 -- # read -r var val 00:06:15.400 10:33:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.400 10:33:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.400 10:33:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.400 10:33:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.400 10:33:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.400 10:33:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.400 10:33:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.400 10:33:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.400 10:33:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.400 10:33:45 -- accel/accel.sh@42 -- # jq -r . 00:06:15.400 [2024-12-03 10:33:45.735515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.400 [2024-12-03 10:33:45.735622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59405 ] 00:06:15.400 [2024-12-03 10:33:45.882203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.657 [2024-12-03 10:33:46.063160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.657 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.657 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.657 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=0x1 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=compress 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 
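
The compress and decompress workloads are the only ones in this section that take an input file: -l points accel_perf at test/accel/bib, which is read in 4096-byte transfers ("Preparing input file..." above). Hand-run equivalent, assuming the same tree:

  # Sketch, not captured output.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib

Note "Verify: No" for compress; the round-trip check is deferred to the decompress TEST below, which the harness invokes with -y.
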
00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=software 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=32 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=32 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=1 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val=No 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.658 10:33:46 -- accel/accel.sh@21 -- # val= 00:06:15.658 10:33:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.658 10:33:46 -- accel/accel.sh@20 -- # read -r var val 00:06:17.578 10:33:47 -- accel/accel.sh@21 -- # val= 00:06:17.578 10:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.578 10:33:47 -- accel/accel.sh@21 -- # val= 00:06:17.578 10:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.578 10:33:47 -- accel/accel.sh@21 -- # val= 00:06:17.578 10:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.578 10:33:47 -- accel/accel.sh@21 -- # val= 
00:06:17.578 10:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.578 10:33:47 -- accel/accel.sh@21 -- # val= 00:06:17.578 10:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.578 10:33:47 -- accel/accel.sh@21 -- # val= 00:06:17.578 10:33:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.578 10:33:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.578 10:33:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.578 10:33:47 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:17.578 10:33:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.578 00:06:17.578 real 0m4.042s 00:06:17.578 user 0m3.572s 00:06:17.578 sys 0m0.263s 00:06:17.578 ************************************ 00:06:17.578 END TEST accel_comp 00:06:17.578 ************************************ 00:06:17.578 10:33:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.578 10:33:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.578 10:33:47 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.578 10:33:47 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:17.578 10:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.578 10:33:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.578 ************************************ 00:06:17.578 START TEST accel_decomp 00:06:17.578 ************************************ 00:06:17.578 10:33:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.578 10:33:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.578 10:33:47 -- accel/accel.sh@17 -- # local accel_module 00:06:17.578 10:33:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.578 10:33:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.578 10:33:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.578 10:33:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.578 10:33:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.578 10:33:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.578 10:33:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.578 10:33:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.578 10:33:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.578 10:33:47 -- accel/accel.sh@42 -- # jq -r . 00:06:17.578 [2024-12-03 10:33:47.799114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.579 [2024-12-03 10:33:47.799249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59446 ] 00:06:17.579 [2024-12-03 10:33:47.942963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.579 [2024-12-03 10:33:48.126540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.514 10:33:49 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:19.515 00:06:19.515 SPDK Configuration: 00:06:19.515 Core mask: 0x1 00:06:19.515 00:06:19.515 Accel Perf Configuration: 00:06:19.515 Workload Type: decompress 00:06:19.515 Transfer size: 4096 bytes 00:06:19.515 Vector count 1 00:06:19.515 Module: software 00:06:19.515 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.515 Queue depth: 32 00:06:19.515 Allocate depth: 32 00:06:19.515 # threads/core: 1 00:06:19.515 Run time: 1 seconds 00:06:19.515 Verify: Yes 00:06:19.515 00:06:19.515 Running for 1 seconds... 00:06:19.515 00:06:19.515 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.515 ------------------------------------------------------------------------------------ 00:06:19.515 0,0 77536/s 142 MiB/s 0 0 00:06:19.515 ==================================================================================== 00:06:19.515 Total 77536/s 302 MiB/s 0 0' 00:06:19.515 10:33:49 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 10:33:49 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 10:33:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.515 10:33:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:19.515 10:33:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.515 10:33:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.515 10:33:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.515 10:33:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.515 10:33:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.515 10:33:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.515 10:33:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.515 10:33:49 -- accel/accel.sh@42 -- # jq -r . 00:06:19.515 [2024-12-03 10:33:49.823355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
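
Each accel test above drives the accel_perf example binary twice: one pass prints the throughput table, and the other pass's output is parsed by accel.sh in a while IFS=: read -r var val loop (the long runs of val=/case/IFS=: xtrace entries), which is how accel_module=software and accel_opc=decompress get captured for the final [[ -n software ]] checks. Each pass is its own SPDK process, so every test logs two distinct [ DPDK EAL parameters ... --file-prefix=spdk_pidNNNNN ] banners. A small sketch for confirming that from a saved console log, assuming the output was captured to a file named autorun.log (a hypothetical name, not from this log):

    # One unique spdk_pid prefix per accel_perf pass; expect two per test.
    grep -o 'file-prefix=spdk_pid[0-9]*' autorun.log | sort -u
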
00:06:19.515 [2024-12-03 10:33:49.823465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59472 ] 00:06:19.515 [2024-12-03 10:33:49.970611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.773 [2024-12-03 10:33:50.155173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val=0x1 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val=decompress 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val=software 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val=32 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- 
accel/accel.sh@21 -- # val=32 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val=1 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val=Yes 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.773 10:33:50 -- accel/accel.sh@21 -- # val= 00:06:19.773 10:33:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.773 10:33:50 -- accel/accel.sh@20 -- # read -r var val 00:06:21.674 10:33:51 -- accel/accel.sh@21 -- # val= 00:06:21.674 10:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.674 10:33:51 -- accel/accel.sh@21 -- # val= 00:06:21.674 10:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.674 10:33:51 -- accel/accel.sh@21 -- # val= 00:06:21.674 10:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.674 10:33:51 -- accel/accel.sh@21 -- # val= 00:06:21.674 10:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.674 10:33:51 -- accel/accel.sh@21 -- # val= 00:06:21.674 10:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.674 10:33:51 -- accel/accel.sh@21 -- # val= 00:06:21.674 10:33:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.674 10:33:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.674 10:33:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.674 10:33:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:21.674 10:33:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.674 00:06:21.674 real 0m4.046s 00:06:21.674 user 0m3.565s 00:06:21.674 sys 0m0.270s 00:06:21.674 10:33:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.674 10:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:21.674 ************************************ 00:06:21.674 END TEST accel_decomp 00:06:21.674 ************************************ 00:06:21.674 10:33:51 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
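
The accel_decomp test closing here ran software decompression of 4096-byte buffers from test/accel/bib on a single core (mask 0x1): roughly 77.5 K transfers/s, 302 MiB/s aggregate, about 4 s of wall time per pass. A minimal sketch for reproducing the pass by hand against a local build, using the repository path shown in the log; dropping the generated -c /dev/fd/62 JSON config is an assumption (it carries no module configuration in these runs), not something the log demonstrates:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # 1-second software decompress of the bib test vectors, verification on (-y),
    # default 4096-byte transfer size.
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y
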
00:06:21.674 10:33:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:21.674 10:33:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.674 10:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:21.674 ************************************ 00:06:21.674 START TEST accel_decmop_full 00:06:21.674 ************************************ 00:06:21.675 10:33:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:21.675 10:33:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.675 10:33:51 -- accel/accel.sh@17 -- # local accel_module 00:06:21.675 10:33:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:21.675 10:33:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:21.675 10:33:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.675 10:33:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.675 10:33:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.675 10:33:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.675 10:33:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.675 10:33:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.675 10:33:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.675 10:33:51 -- accel/accel.sh@42 -- # jq -r . 00:06:21.675 [2024-12-03 10:33:51.886577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.675 [2024-12-03 10:33:51.886655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:06:21.675 [2024-12-03 10:33:52.027793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.675 [2024-12-03 10:33:52.201561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.575 10:33:53 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:23.575 00:06:23.575 SPDK Configuration: 00:06:23.575 Core mask: 0x1 00:06:23.575 00:06:23.575 Accel Perf Configuration: 00:06:23.575 Workload Type: decompress 00:06:23.575 Transfer size: 111250 bytes 00:06:23.575 Vector count 1 00:06:23.575 Module: software 00:06:23.575 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.575 Queue depth: 32 00:06:23.575 Allocate depth: 32 00:06:23.575 # threads/core: 1 00:06:23.575 Run time: 1 seconds 00:06:23.575 Verify: Yes 00:06:23.575 00:06:23.575 Running for 1 seconds... 
00:06:23.575 00:06:23.575 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.575 ------------------------------------------------------------------------------------ 00:06:23.575 0,0 5408/s 223 MiB/s 0 0 00:06:23.575 ==================================================================================== 00:06:23.575 Total 5408/s 573 MiB/s 0 0' 00:06:23.575 10:33:53 -- accel/accel.sh@20 -- # IFS=: 00:06:23.575 10:33:53 -- accel/accel.sh@20 -- # read -r var val 00:06:23.575 10:33:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.575 10:33:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.575 10:33:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.575 10:33:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.575 10:33:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.575 10:33:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.575 10:33:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.575 10:33:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.575 10:33:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.575 10:33:53 -- accel/accel.sh@42 -- # jq -r . 00:06:23.575 [2024-12-03 10:33:53.908071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.575 [2024-12-03 10:33:53.908194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59539 ] 00:06:23.575 [2024-12-03 10:33:54.058471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.834 [2024-12-03 10:33:54.233838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=0x1 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=decompress 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:23.834 10:33:54 -- accel/accel.sh@20 
-- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=software 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=32 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=32 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=1 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val=Yes 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.834 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.834 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.834 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.835 10:33:54 -- accel/accel.sh@21 -- # val= 00:06:23.835 10:33:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.835 10:33:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.835 10:33:54 -- accel/accel.sh@20 -- # read -r var val 00:06:25.733 10:33:55 -- accel/accel.sh@21 -- # val= 00:06:25.733 10:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # IFS=: 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # read -r var val 00:06:25.733 10:33:55 -- accel/accel.sh@21 -- # val= 00:06:25.733 10:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # IFS=: 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # read -r var val 00:06:25.733 10:33:55 -- accel/accel.sh@21 -- # val= 00:06:25.733 10:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # IFS=: 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # read -r var val 00:06:25.733 10:33:55 -- accel/accel.sh@21 -- # 
val= 00:06:25.733 10:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # IFS=: 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # read -r var val 00:06:25.733 10:33:55 -- accel/accel.sh@21 -- # val= 00:06:25.733 10:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # IFS=: 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # read -r var val 00:06:25.733 10:33:55 -- accel/accel.sh@21 -- # val= 00:06:25.733 10:33:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # IFS=: 00:06:25.733 10:33:55 -- accel/accel.sh@20 -- # read -r var val 00:06:25.733 10:33:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.733 10:33:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:25.733 10:33:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.733 00:06:25.733 real 0m4.049s 00:06:25.733 user 0m3.578s 00:06:25.733 sys 0m0.259s 00:06:25.733 10:33:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.733 10:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.733 ************************************ 00:06:25.733 END TEST accel_decmop_full 00:06:25.733 ************************************ 00:06:25.733 10:33:55 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:25.733 10:33:55 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:25.733 10:33:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.733 10:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.733 ************************************ 00:06:25.733 START TEST accel_decomp_mcore 00:06:25.733 ************************************ 00:06:25.733 10:33:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:25.733 10:33:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.733 10:33:55 -- accel/accel.sh@17 -- # local accel_module 00:06:25.733 10:33:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:25.733 10:33:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:25.733 10:33:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.733 10:33:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.733 10:33:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.733 10:33:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.733 10:33:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.733 10:33:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.733 10:33:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.733 10:33:55 -- accel/accel.sh@42 -- # jq -r . 00:06:25.733 [2024-12-03 10:33:55.977630] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
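
The _full variants append -o 0 to the same command line; in the run above accel_perf then reports a 111250-byte transfer size instead of the default 4096, which looks like whole-file decompression of the bib input rather than fixed 4 KiB blocks (an inference from the printed configuration, not stated in the log). The rate drops to roughly 5.4 K transfers/s while aggregate bandwidth rises to 573 MiB/s. The equivalent manual invocation, under the same SPDK_REPO assumption as the earlier sketch:

    # "Full" decompress: -o 0 lets the workload derive the transfer size.
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y -o 0
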
00:06:25.733 [2024-12-03 10:33:55.977724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:06:25.733 [2024-12-03 10:33:56.122816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.733 [2024-12-03 10:33:56.301604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.733 [2024-12-03 10:33:56.301739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.733 [2024-12-03 10:33:56.302361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.733 [2024-12-03 10:33:56.302369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.689 10:33:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:27.689 00:06:27.689 SPDK Configuration: 00:06:27.690 Core mask: 0xf 00:06:27.690 00:06:27.690 Accel Perf Configuration: 00:06:27.690 Workload Type: decompress 00:06:27.690 Transfer size: 4096 bytes 00:06:27.690 Vector count 1 00:06:27.690 Module: software 00:06:27.690 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.690 Queue depth: 32 00:06:27.690 Allocate depth: 32 00:06:27.690 # threads/core: 1 00:06:27.690 Run time: 1 seconds 00:06:27.690 Verify: Yes 00:06:27.690 00:06:27.690 Running for 1 seconds... 00:06:27.690 00:06:27.690 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.690 ------------------------------------------------------------------------------------ 00:06:27.690 0,0 68576/s 126 MiB/s 0 0 00:06:27.690 3,0 52544/s 96 MiB/s 0 0 00:06:27.690 2,0 52640/s 97 MiB/s 0 0 00:06:27.690 1,0 51744/s 95 MiB/s 0 0 00:06:27.690 ==================================================================================== 00:06:27.690 Total 225504/s 880 MiB/s 0 0' 00:06:27.690 10:33:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.690 10:33:57 -- accel/accel.sh@20 -- # IFS=: 00:06:27.690 10:33:57 -- accel/accel.sh@20 -- # read -r var val 00:06:27.690 10:33:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.690 10:33:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.690 10:33:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.690 10:33:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.690 10:33:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.690 10:33:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.690 10:33:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.690 10:33:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.690 10:33:57 -- accel/accel.sh@42 -- # jq -r . 00:06:27.690 [2024-12-03 10:33:58.022758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
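
With -m 0xf the same job fans out over four reactors (Total cores available: 4, one reactor per core 0-3), and the table gains a Core,Thread row per core. The per-core transfer rates sum to the printed Total, and at 4096 bytes per transfer that total matches the 880 MiB/s aggregate; a quick shell check with the numbers copied from the table:

    # Sum of the per-core rows, then aggregate bandwidth at 4096 B per transfer.
    echo $(( 68576 + 52544 + 52640 + 51744 )) transfers/s               # -> 225504
    echo $(( (68576 + 52544 + 52640 + 51744) * 4096 / 1048576 )) MiB/s  # -> 880
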
00:06:27.690 [2024-12-03 10:33:58.023176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59609 ] 00:06:27.690 [2024-12-03 10:33:58.182695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.947 [2024-12-03 10:33:58.366248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.947 [2024-12-03 10:33:58.366432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.947 [2024-12-03 10:33:58.366977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.947 [2024-12-03 10:33:58.366987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.947 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.947 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.947 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val=0xf 00:06:27.947 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.947 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.947 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val=decompress 00:06:27.947 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.947 10:33:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.947 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.947 10:33:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val=software 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 
00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val=32 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val=32 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val=1 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val=Yes 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.948 10:33:58 -- accel/accel.sh@21 -- # val= 00:06:27.948 10:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.948 10:33:58 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@21 -- # val= 00:06:29.842 10:34:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # IFS=: 00:06:29.842 10:34:00 -- accel/accel.sh@20 -- # read -r var val 00:06:29.842 10:34:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.842 10:34:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:29.842 10:34:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.842 00:06:29.842 real 0m4.108s 00:06:29.842 user 0m12.131s 00:06:29.842 sys 0m0.320s 00:06:29.843 10:34:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.843 10:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:29.843 ************************************ 00:06:29.843 END TEST accel_decomp_mcore 00:06:29.843 ************************************ 00:06:29.843 10:34:00 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.843 10:34:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:29.843 10:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.843 10:34:00 -- common/autotest_common.sh@10 -- # set +x 00:06:29.843 ************************************ 00:06:29.843 START TEST accel_decomp_full_mcore 00:06:29.843 ************************************ 00:06:29.843 10:34:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.843 10:34:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.843 10:34:00 -- accel/accel.sh@17 -- # local accel_module 00:06:29.843 10:34:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.843 10:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.843 10:34:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.843 10:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.843 10:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.843 10:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.843 10:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.843 10:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.843 10:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.843 10:34:00 -- accel/accel.sh@42 -- # jq -r . 00:06:29.843 [2024-12-03 10:34:00.127214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.843 [2024-12-03 10:34:00.127331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59659 ] 00:06:29.843 [2024-12-03 10:34:00.275297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.100 [2024-12-03 10:34:00.453345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.100 [2024-12-03 10:34:00.453471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.100 [2024-12-03 10:34:00.453726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.100 [2024-12-03 10:34:00.453779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.997 10:34:02 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:31.997 00:06:31.997 SPDK Configuration: 00:06:31.997 Core mask: 0xf 00:06:31.997 00:06:31.997 Accel Perf Configuration: 00:06:31.997 Workload Type: decompress 00:06:31.997 Transfer size: 111250 bytes 00:06:31.997 Vector count 1 00:06:31.997 Module: software 00:06:31.997 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.997 Queue depth: 32 00:06:31.997 Allocate depth: 32 00:06:31.997 # threads/core: 1 00:06:31.997 Run time: 1 seconds 00:06:31.997 Verify: Yes 00:06:31.997 00:06:31.997 Running for 1 seconds... 00:06:31.997 00:06:31.997 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.997 ------------------------------------------------------------------------------------ 00:06:31.997 0,0 5344/s 220 MiB/s 0 0 00:06:31.997 3,0 4256/s 175 MiB/s 0 0 00:06:31.997 2,0 4256/s 175 MiB/s 0 0 00:06:31.997 1,0 4192/s 173 MiB/s 0 0 00:06:31.997 ==================================================================================== 00:06:31.997 Total 18048/s 1914 MiB/s 0 0' 00:06:31.997 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.997 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.997 10:34:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.997 10:34:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.997 10:34:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.997 10:34:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.997 10:34:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.997 10:34:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.997 10:34:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.997 10:34:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.997 10:34:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.997 10:34:02 -- accel/accel.sh@42 -- # jq -r . 00:06:31.997 [2024-12-03 10:34:02.195415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
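
accel_decomp_full_mcore combines both knobs: -o 0 for 111250-byte transfers and -m 0xf for four cores. Per-core rates are far lower than in the 4 KiB mcore run (about 4.2-5.3 K/s versus 52-68 K/s), but the larger transfers lift the aggregate to 1914 MiB/s (18048 transfers/s at 111250 bytes each is about 2.0 GB/s). Manual equivalent, same path assumption as the earlier sketches:

    # Full-size transfers spread across cores 0-3.
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y -o 0 -m 0xf
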
00:06:31.997 [2024-12-03 10:34:02.195641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59691 ] 00:06:31.997 [2024-12-03 10:34:02.343094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.997 [2024-12-03 10:34:02.519306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.997 [2024-12-03 10:34:02.519411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.997 [2024-12-03 10:34:02.519650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.997 [2024-12-03 10:34:02.519663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=0xf 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=decompress 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=software 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 
00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=32 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=32 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=1 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val=Yes 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.255 10:34:02 -- accel/accel.sh@21 -- # val= 00:06:32.255 10:34:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # IFS=: 00:06:32.255 10:34:02 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- 
accel/accel.sh@20 -- # read -r var val 00:06:33.631 10:34:04 -- accel/accel.sh@21 -- # val= 00:06:33.631 10:34:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # IFS=: 00:06:33.631 10:34:04 -- accel/accel.sh@20 -- # read -r var val 00:06:33.631 ************************************ 00:06:33.631 END TEST accel_decomp_full_mcore 00:06:33.631 ************************************ 00:06:33.631 10:34:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.631 10:34:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:33.631 10:34:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.631 00:06:33.631 real 0m4.116s 00:06:33.631 user 0m6.157s 00:06:33.631 sys 0m0.163s 00:06:33.631 10:34:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.631 10:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.969 10:34:04 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.969 10:34:04 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:33.969 10:34:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.969 10:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.969 ************************************ 00:06:33.969 START TEST accel_decomp_mthread 00:06:33.969 ************************************ 00:06:33.969 10:34:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.969 10:34:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.969 10:34:04 -- accel/accel.sh@17 -- # local accel_module 00:06:33.969 10:34:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.969 10:34:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.969 10:34:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.969 10:34:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.969 10:34:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.969 10:34:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.969 10:34:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.969 10:34:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.969 10:34:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.969 10:34:04 -- accel/accel.sh@42 -- # jq -r . 00:06:33.969 [2024-12-03 10:34:04.280676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.969 [2024-12-03 10:34:04.280754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:06:33.969 [2024-12-03 10:34:04.422191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.227 [2024-12-03 10:34:04.598542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.125 10:34:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:36.125 00:06:36.125 SPDK Configuration: 00:06:36.125 Core mask: 0x1 00:06:36.125 00:06:36.125 Accel Perf Configuration: 00:06:36.125 Workload Type: decompress 00:06:36.125 Transfer size: 4096 bytes 00:06:36.125 Vector count 1 00:06:36.125 Module: software 00:06:36.125 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.125 Queue depth: 32 00:06:36.125 Allocate depth: 32 00:06:36.125 # threads/core: 2 00:06:36.125 Run time: 1 seconds 00:06:36.125 Verify: Yes 00:06:36.125 00:06:36.125 Running for 1 seconds... 00:06:36.125 00:06:36.125 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.125 ------------------------------------------------------------------------------------ 00:06:36.125 0,1 40544/s 74 MiB/s 0 0 00:06:36.125 0,0 40416/s 74 MiB/s 0 0 00:06:36.125 ==================================================================================== 00:06:36.125 Total 80960/s 316 MiB/s 0 0' 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:36.125 10:34:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:36.125 10:34:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.125 10:34:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.125 10:34:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.125 10:34:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.125 10:34:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.125 10:34:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.125 10:34:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.125 10:34:06 -- accel/accel.sh@42 -- # jq -r . 00:06:36.125 [2024-12-03 10:34:06.274879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
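
-T 2 scales threads rather than cores: the configuration above shows # threads/core: 2 with the core mask still 0x1, and the table now keys each worker by a Core,Thread pair (0,0 and 0,1), which split the work almost evenly (40416/s and 40544/s, summing to the 80960/s Total). Manual equivalent, same assumptions as the earlier sketches:

    # One core, two worker threads on it.
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y -T 2
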
00:06:36.125 [2024-12-03 10:34:06.275176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59761 ] 00:06:36.125 [2024-12-03 10:34:06.418207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.125 [2024-12-03 10:34:06.599918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val=0x1 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val=decompress 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.125 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.125 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.125 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.384 10:34:06 -- accel/accel.sh@21 -- # val=software 00:06:36.384 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.384 10:34:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.384 10:34:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.384 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.384 10:34:06 -- accel/accel.sh@21 -- # val=32 00:06:36.384 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.384 10:34:06 -- 
accel/accel.sh@21 -- # val=32 00:06:36.384 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.384 10:34:06 -- accel/accel.sh@21 -- # val=2 00:06:36.384 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.384 10:34:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.384 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.384 10:34:06 -- accel/accel.sh@21 -- # val=Yes 00:06:36.384 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.384 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.385 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.385 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.385 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.385 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.385 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:36.385 10:34:06 -- accel/accel.sh@21 -- # val= 00:06:36.385 10:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.385 10:34:06 -- accel/accel.sh@20 -- # IFS=: 00:06:36.385 10:34:06 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@21 -- # val= 00:06:37.762 10:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@21 -- # val= 00:06:37.762 10:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@21 -- # val= 00:06:37.762 10:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@21 -- # val= 00:06:37.762 10:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@21 -- # val= 00:06:37.762 10:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@21 -- # val= 00:06:37.762 10:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@21 -- # val= 00:06:37.762 10:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.762 10:34:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.762 10:34:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.762 10:34:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.762 10:34:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.762 00:06:37.762 real 0m4.008s 00:06:37.762 user 0m3.531s 00:06:37.762 sys 0m0.271s 00:06:37.762 ************************************ 00:06:37.762 END TEST accel_decomp_mthread 00:06:37.762 ************************************ 00:06:37.762 10:34:08 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:06:37.762 10:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.762 10:34:08 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.762 10:34:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:37.762 10:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.762 10:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.762 ************************************ 00:06:37.762 START TEST accel_deomp_full_mthread 00:06:37.762 ************************************ 00:06:37.762 10:34:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.762 10:34:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.762 10:34:08 -- accel/accel.sh@17 -- # local accel_module 00:06:37.762 10:34:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.762 10:34:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.762 10:34:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.762 10:34:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.762 10:34:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.762 10:34:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.762 10:34:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.762 10:34:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.762 10:34:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.762 10:34:08 -- accel/accel.sh@42 -- # jq -r . 00:06:37.762 [2024-12-03 10:34:08.333045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.762 [2024-12-03 10:34:08.333160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:06:38.019 [2024-12-03 10:34:08.479118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.275 [2024-12-03 10:34:08.654596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.176 10:34:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:40.176 00:06:40.176 SPDK Configuration: 00:06:40.176 Core mask: 0x1 00:06:40.176 00:06:40.176 Accel Perf Configuration: 00:06:40.176 Workload Type: decompress 00:06:40.176 Transfer size: 111250 bytes 00:06:40.176 Vector count 1 00:06:40.176 Module: software 00:06:40.176 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.176 Queue depth: 32 00:06:40.176 Allocate depth: 32 00:06:40.176 # threads/core: 2 00:06:40.176 Run time: 1 seconds 00:06:40.176 Verify: Yes 00:06:40.176 00:06:40.176 Running for 1 seconds... 
00:06:40.176 
00:06:40.176 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:40.176 ------------------------------------------------------------------------------------
00:06:40.176 0,1 2752/s 113 MiB/s 0 0
00:06:40.176 0,0 2720/s 112 MiB/s 0 0
00:06:40.176 ====================================================================================
00:06:40.176 Total 5472/s 580 MiB/s 0 0'
00:06:40.176 10:34:10 -- accel/accel.sh@20 -- # IFS=:
00:06:40.176 10:34:10 -- accel/accel.sh@20 -- # read -r var val
00:06:40.176 10:34:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:40.176 10:34:10 -- accel/accel.sh@12 -- # build_accel_config
00:06:40.176 10:34:10 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:40.176 10:34:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:06:40.176 10:34:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:40.176 10:34:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:40.176 10:34:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:40.176 10:34:10 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:40.176 10:34:10 -- accel/accel.sh@41 -- # local IFS=,
00:06:40.176 10:34:10 -- accel/accel.sh@42 -- # jq -r .
00:06:40.176 [2024-12-03 10:34:10.371979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:40.176 [2024-12-03 10:34:10.372101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59828 ]
00:06:40.176 [2024-12-03 10:34:10.517307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.176 [2024-12-03 10:34:10.714216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=
00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=:
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val
00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=
00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=:
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val
00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=
00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=:
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val
00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=0x1
00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=:
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val
00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=
00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=:
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val
00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=
00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=:
00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val
00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=decompress
00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in
00:06:40.438 10:34:10 -- accel/accel.sh@24 -- #
accel_opc=decompress 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val= 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=software 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=32 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=32 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=2 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val=Yes 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val= 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:40.438 10:34:10 -- accel/accel.sh@21 -- # val= 00:06:40.438 10:34:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # IFS=: 00:06:40.438 10:34:10 -- accel/accel.sh@20 -- # read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@21 -- # val= 00:06:42.355 10:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@21 -- # val= 00:06:42.355 10:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@21 -- # val= 00:06:42.355 10:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # 
read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@21 -- # val= 00:06:42.355 10:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@21 -- # val= 00:06:42.355 10:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@21 -- # val= 00:06:42.355 10:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@21 -- # val= 00:06:42.355 10:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.355 10:34:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.355 10:34:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.355 10:34:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:42.355 10:34:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.355 ************************************ 00:06:42.355 END TEST accel_deomp_full_mthread 00:06:42.355 ************************************ 00:06:42.355 00:06:42.355 real 0m4.249s 00:06:42.355 user 0m3.744s 00:06:42.355 sys 0m0.294s 00:06:42.355 10:34:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.355 10:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.356 10:34:12 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:42.356 10:34:12 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:42.356 10:34:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:42.356 10:34:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.356 10:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:42.356 10:34:12 -- accel/accel.sh@129 -- # build_accel_config 00:06:42.356 10:34:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.356 10:34:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.356 10:34:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.356 10:34:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.356 10:34:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.356 10:34:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.356 10:34:12 -- accel/accel.sh@42 -- # jq -r . 00:06:42.356 ************************************ 00:06:42.356 START TEST accel_dif_functional_tests 00:06:42.356 ************************************ 00:06:42.356 10:34:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:42.356 [2024-12-03 10:34:12.636171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
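The xtrace above captures the exact accel_perf command the harness drives for the threaded full-buffer decompress case. A minimal sketch for re-running it by hand, assuming the same /home/vagrant/spdk_repo checkout and the pre-generated test/accel/bib input file (both paths appear verbatim in the trace); the harness additionally pipes a JSON accel config in via -c /dev/fd/62, which a standalone run can omit:

    # Flags per the "Accel Perf Configuration" block logged above:
    # -t run time (1 s), -w workload (decompress), -l input file,
    # -y verify, -T threads per core (2).
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w decompress \
        -l test/accel/bib -y -o 0 -T 2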
00:06:42.356 [2024-12-03 10:34:12.636274] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59870 ] 00:06:42.356 [2024-12-03 10:34:12.784427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.618 [2024-12-03 10:34:12.989541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.618 [2024-12-03 10:34:12.989987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.618 [2024-12-03 10:34:12.989995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.880 00:06:42.880 00:06:42.880 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.880 http://cunit.sourceforge.net/ 00:06:42.880 00:06:42.880 00:06:42.880 Suite: accel_dif 00:06:42.880 Test: verify: DIF generated, GUARD check ...passed 00:06:42.880 Test: verify: DIF generated, APPTAG check ...passed 00:06:42.880 Test: verify: DIF generated, REFTAG check ...passed 00:06:42.880 Test: verify: DIF not generated, GUARD check ...passed 00:06:42.880 Test: verify: DIF not generated, APPTAG check ...passed 00:06:42.880 Test: verify: DIF not generated, REFTAG check ...[2024-12-03 10:34:13.229880] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:42.880 [2024-12-03 10:34:13.229959] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:42.880 [2024-12-03 10:34:13.230020] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:42.880 [2024-12-03 10:34:13.230071] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:42.880 passed 00:06:42.880 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:42.880 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:42.880 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:42.880 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:42.880 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:42.880 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-03 10:34:13.230104] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:42.880 [2024-12-03 10:34:13.230130] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:42.880 [2024-12-03 10:34:13.230204] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:42.880 passed 00:06:42.880 Test: generate copy: DIF generated, GUARD check ...passed 00:06:42.880 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:42.880 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:42.880 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-12-03 10:34:13.230404] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:42.880 passed 00:06:42.880 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:42.880 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:42.880 Test: generate copy: iovecs-len validate ...passed 00:06:42.880 Test: generate copy: buffer alignment validate ...passed 00:06:42.880 00:06:42.880 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.880 suites 1 1 n/a 0 0 00:06:42.880 tests 20 20 20 0 0 00:06:42.880 
asserts 204 204 204 0 n/a 00:06:42.880 00:06:42.880 Elapsed time = 0.005 seconds 00:06:42.880 [2024-12-03 10:34:13.230934] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:43.826 ************************************ 00:06:43.826 END TEST accel_dif_functional_tests 00:06:43.826 ************************************ 00:06:43.826 00:06:43.826 real 0m1.482s 00:06:43.826 user 0m2.734s 00:06:43.826 sys 0m0.192s 00:06:43.826 10:34:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.826 10:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.826 ************************************ 00:06:43.826 END TEST accel 00:06:43.826 ************************************ 00:06:43.826 00:06:43.826 real 1m27.363s 00:06:43.826 user 1m35.279s 00:06:43.826 sys 0m6.817s 00:06:43.826 10:34:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.827 10:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.827 10:34:14 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:43.827 10:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.827 10:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.827 10:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.827 ************************************ 00:06:43.827 START TEST accel_rpc 00:06:43.827 ************************************ 00:06:43.827 10:34:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:43.827 * Looking for test storage... 00:06:43.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:43.827 10:34:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:43.827 10:34:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:43.827 10:34:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:43.827 10:34:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:43.827 10:34:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:43.827 10:34:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:43.827 10:34:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:43.827 10:34:14 -- scripts/common.sh@335 -- # IFS=.-: 00:06:43.827 10:34:14 -- scripts/common.sh@335 -- # read -ra ver1 00:06:43.827 10:34:14 -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.827 10:34:14 -- scripts/common.sh@336 -- # read -ra ver2 00:06:43.827 10:34:14 -- scripts/common.sh@337 -- # local 'op=<' 00:06:43.827 10:34:14 -- scripts/common.sh@339 -- # ver1_l=2 00:06:43.827 10:34:14 -- scripts/common.sh@340 -- # ver2_l=1 00:06:43.827 10:34:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:43.827 10:34:14 -- scripts/common.sh@343 -- # case "$op" in 00:06:43.827 10:34:14 -- scripts/common.sh@344 -- # : 1 00:06:43.827 10:34:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:43.827 10:34:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.827 10:34:14 -- scripts/common.sh@364 -- # decimal 1 00:06:43.827 10:34:14 -- scripts/common.sh@352 -- # local d=1 00:06:43.827 10:34:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.827 10:34:14 -- scripts/common.sh@354 -- # echo 1 00:06:43.827 10:34:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:43.827 10:34:14 -- scripts/common.sh@365 -- # decimal 2 00:06:43.827 10:34:14 -- scripts/common.sh@352 -- # local d=2 00:06:43.827 10:34:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.827 10:34:14 -- scripts/common.sh@354 -- # echo 2 00:06:43.827 10:34:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:43.827 10:34:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:43.827 10:34:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:43.827 10:34:14 -- scripts/common.sh@367 -- # return 0 00:06:43.827 10:34:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.827 10:34:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.827 --rc genhtml_branch_coverage=1 00:06:43.827 --rc genhtml_function_coverage=1 00:06:43.827 --rc genhtml_legend=1 00:06:43.827 --rc geninfo_all_blocks=1 00:06:43.827 --rc geninfo_unexecuted_blocks=1 00:06:43.827 00:06:43.827 ' 00:06:43.827 10:34:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.827 --rc genhtml_branch_coverage=1 00:06:43.827 --rc genhtml_function_coverage=1 00:06:43.827 --rc genhtml_legend=1 00:06:43.827 --rc geninfo_all_blocks=1 00:06:43.827 --rc geninfo_unexecuted_blocks=1 00:06:43.827 00:06:43.827 ' 00:06:43.827 10:34:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.827 --rc genhtml_branch_coverage=1 00:06:43.827 --rc genhtml_function_coverage=1 00:06:43.827 --rc genhtml_legend=1 00:06:43.827 --rc geninfo_all_blocks=1 00:06:43.827 --rc geninfo_unexecuted_blocks=1 00:06:43.827 00:06:43.827 ' 00:06:43.827 10:34:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.827 --rc genhtml_branch_coverage=1 00:06:43.827 --rc genhtml_function_coverage=1 00:06:43.827 --rc genhtml_legend=1 00:06:43.827 --rc geninfo_all_blocks=1 00:06:43.827 --rc geninfo_unexecuted_blocks=1 00:06:43.827 00:06:43.827 ' 00:06:43.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.827 10:34:14 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.827 10:34:14 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59953 00:06:43.827 10:34:14 -- accel/accel_rpc.sh@15 -- # waitforlisten 59953 00:06:43.827 10:34:14 -- common/autotest_common.sh@829 -- # '[' -z 59953 ']' 00:06:43.827 10:34:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.827 10:34:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.827 10:34:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
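The lt/cmp_versions calls traced above are the dotted-version check in scripts/common.sh that gates the lcov coverage options. A condensed sketch of that logic, simplified from the xtrace (the real helper also normalizes fields through decimal() and supports the other comparison operators, so this is an illustration rather than the verbatim function):

    # Returns 0 (true) when $1 < $2, comparing dot/dash/colon separated fields.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields compare as 0, so "1.15" vs "2" walks [1,15] against [2].
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not strictly less
    }

As in the trace, lt 1.15 2 succeeds because 1 < 2 already decides it on the first field.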
00:06:43.827 10:34:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.827 10:34:14 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:43.827 10:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.827 [2024-12-03 10:34:14.337455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.827 [2024-12-03 10:34:14.337567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59953 ] 00:06:44.107 [2024-12-03 10:34:14.484408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.107 [2024-12-03 10:34:14.689612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.107 [2024-12-03 10:34:14.689840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.689 10:34:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.689 10:34:15 -- common/autotest_common.sh@862 -- # return 0 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:44.689 10:34:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.689 10:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.689 10:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.689 ************************************ 00:06:44.689 START TEST accel_assign_opcode 00:06:44.689 ************************************ 00:06:44.689 10:34:15 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:44.689 10:34:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.689 10:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.689 [2024-12-03 10:34:15.166554] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:44.689 10:34:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:44.689 10:34:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.689 10:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.689 [2024-12-03 10:34:15.174507] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:44.689 10:34:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.689 10:34:15 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:44.689 10:34:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.689 10:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.259 10:34:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.259 10:34:15 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:45.259 10:34:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.259 10:34:15 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:45.259 10:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.259 10:34:15 -- accel/accel_rpc.sh@42 -- # grep software 
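The assign-opcode suite above boils down to a short RPC conversation with a target started under --wait-for-rpc, so that the assignments land before module initialization. Reconstructed from the xtrace (rpc.py invocations as logged; the bogus module name is intentional, and the trace shows both assignments are accepted with the later one winning once init runs):

    # Target is running as: ./build/bin/spdk_tgt --wait-for-rpc
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    # Verify: the copy opcode must now report the software module.
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software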
00:06:45.259 10:34:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:45.259 software
00:06:45.259 ************************************
00:06:45.259 END TEST accel_assign_opcode
00:06:45.259 ************************************
00:06:45.259 
00:06:45.259 real 0m0.635s
00:06:45.259 user 0m0.032s
00:06:45.259 sys 0m0.012s
00:06:45.259 10:34:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:45.259 10:34:15 -- common/autotest_common.sh@10 -- # set +x
00:06:45.259 10:34:15 -- accel/accel_rpc.sh@55 -- # killprocess 59953
00:06:45.259 10:34:15 -- common/autotest_common.sh@936 -- # '[' -z 59953 ']'
00:06:45.259 10:34:15 -- common/autotest_common.sh@940 -- # kill -0 59953
00:06:45.259 10:34:15 -- common/autotest_common.sh@941 -- # uname
00:06:45.259 10:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:45.259 10:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59953
00:06:45.259 killing process with pid 59953
00:06:45.259 10:34:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:45.259 10:34:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:45.259 10:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59953'
00:06:45.259 10:34:15 -- common/autotest_common.sh@955 -- # kill 59953
00:06:45.259 10:34:15 -- common/autotest_common.sh@960 -- # wait 59953
00:06:47.175 
00:06:47.175 real 0m3.310s
00:06:47.175 user 0m3.236s
00:06:47.175 sys 0m0.432s
00:06:47.175 10:34:17 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:47.175 10:34:17 -- common/autotest_common.sh@10 -- # set +x
00:06:47.175 ************************************
00:06:47.175 END TEST accel_rpc
00:06:47.176 ************************************
00:06:47.176 10:34:17 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:47.176 10:34:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:47.176 10:34:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:47.176 10:34:17 -- common/autotest_common.sh@10 -- # set +x
00:06:47.176 ************************************
00:06:47.176 START TEST app_cmdline
00:06:47.176 ************************************
00:06:47.176 10:34:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:47.176 * Looking for test storage...
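Every START TEST/END TEST banner and real/user/sys block in this log comes from the run_test wrapper in common/autotest_common.sh. A sketch of its observable behavior, as inferred from the output above (the actual wrapper also tracks failures and toggles xtrace, so this is a simplification, not the verbatim function):

    run_test() {
        local name="$1"; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # emits the real/user/sys timings seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }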
00:06:47.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.176 10:34:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.176 10:34:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.176 10:34:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:47.176 10:34:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:47.176 10:34:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:47.176 10:34:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:47.176 10:34:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:47.176 10:34:17 -- scripts/common.sh@335 -- # IFS=.-: 00:06:47.176 10:34:17 -- scripts/common.sh@335 -- # read -ra ver1 00:06:47.176 10:34:17 -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.176 10:34:17 -- scripts/common.sh@336 -- # read -ra ver2 00:06:47.176 10:34:17 -- scripts/common.sh@337 -- # local 'op=<' 00:06:47.176 10:34:17 -- scripts/common.sh@339 -- # ver1_l=2 00:06:47.176 10:34:17 -- scripts/common.sh@340 -- # ver2_l=1 00:06:47.176 10:34:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:47.176 10:34:17 -- scripts/common.sh@343 -- # case "$op" in 00:06:47.176 10:34:17 -- scripts/common.sh@344 -- # : 1 00:06:47.176 10:34:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:47.176 10:34:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.176 10:34:17 -- scripts/common.sh@364 -- # decimal 1 00:06:47.176 10:34:17 -- scripts/common.sh@352 -- # local d=1 00:06:47.176 10:34:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.176 10:34:17 -- scripts/common.sh@354 -- # echo 1 00:06:47.176 10:34:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:47.176 10:34:17 -- scripts/common.sh@365 -- # decimal 2 00:06:47.176 10:34:17 -- scripts/common.sh@352 -- # local d=2 00:06:47.176 10:34:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.176 10:34:17 -- scripts/common.sh@354 -- # echo 2 00:06:47.176 10:34:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:47.176 10:34:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:47.176 10:34:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:47.176 10:34:17 -- scripts/common.sh@367 -- # return 0 00:06:47.176 10:34:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.176 10:34:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:47.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.176 --rc genhtml_branch_coverage=1 00:06:47.176 --rc genhtml_function_coverage=1 00:06:47.176 --rc genhtml_legend=1 00:06:47.176 --rc geninfo_all_blocks=1 00:06:47.176 --rc geninfo_unexecuted_blocks=1 00:06:47.176 00:06:47.176 ' 00:06:47.176 10:34:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:47.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.176 --rc genhtml_branch_coverage=1 00:06:47.176 --rc genhtml_function_coverage=1 00:06:47.176 --rc genhtml_legend=1 00:06:47.176 --rc geninfo_all_blocks=1 00:06:47.176 --rc geninfo_unexecuted_blocks=1 00:06:47.176 00:06:47.176 ' 00:06:47.176 10:34:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:47.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.176 --rc genhtml_branch_coverage=1 00:06:47.176 --rc genhtml_function_coverage=1 00:06:47.176 --rc genhtml_legend=1 00:06:47.176 --rc geninfo_all_blocks=1 00:06:47.176 --rc geninfo_unexecuted_blocks=1 00:06:47.176 00:06:47.176 ' 00:06:47.176 10:34:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:47.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.176 --rc genhtml_branch_coverage=1 00:06:47.176 --rc genhtml_function_coverage=1 00:06:47.176 --rc genhtml_legend=1 00:06:47.176 --rc geninfo_all_blocks=1 00:06:47.176 --rc geninfo_unexecuted_blocks=1 00:06:47.176 00:06:47.176 ' 00:06:47.176 10:34:17 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.176 10:34:17 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60071 00:06:47.176 10:34:17 -- app/cmdline.sh@18 -- # waitforlisten 60071 00:06:47.176 10:34:17 -- common/autotest_common.sh@829 -- # '[' -z 60071 ']' 00:06:47.176 10:34:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.176 10:34:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.176 10:34:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.176 10:34:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.176 10:34:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.176 10:34:17 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.176 [2024-12-03 10:34:17.686115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:47.176 [2024-12-03 10:34:17.686230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60071 ] 00:06:47.437 [2024-12-03 10:34:17.830848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.437 [2024-12-03 10:34:18.027652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.437 [2024-12-03 10:34:18.027865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.822 10:34:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.822 10:34:19 -- common/autotest_common.sh@862 -- # return 0 00:06:48.822 10:34:19 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:48.822 { 00:06:48.822 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:06:48.822 "fields": { 00:06:48.822 "major": 24, 00:06:48.822 "minor": 1, 00:06:48.822 "patch": 1, 00:06:48.822 "suffix": "-pre", 00:06:48.822 "commit": "c13c99a5e" 00:06:48.822 } 00:06:48.822 } 00:06:48.822 10:34:19 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.822 10:34:19 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.822 10:34:19 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.822 10:34:19 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.822 10:34:19 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.822 10:34:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.822 10:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:48.822 10:34:19 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.822 10:34:19 -- app/cmdline.sh@26 -- # sort 00:06:48.822 10:34:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.822 10:34:19 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.822 10:34:19 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.822 10:34:19 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.822 10:34:19 -- common/autotest_common.sh@650 -- # local es=0 00:06:48.822 10:34:19 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.822 10:34:19 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.822 10:34:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.822 10:34:19 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.822 10:34:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.822 10:34:19 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.822 10:34:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.822 10:34:19 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.822 10:34:19 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:48.822 10:34:19 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.083 request: 00:06:49.083 { 00:06:49.083 "method": "env_dpdk_get_mem_stats", 00:06:49.083 "req_id": 1 00:06:49.083 } 00:06:49.083 Got 
JSON-RPC error response 00:06:49.083 response: 00:06:49.083 { 00:06:49.084 "code": -32601, 00:06:49.084 "message": "Method not found" 00:06:49.084 } 00:06:49.084 10:34:19 -- common/autotest_common.sh@653 -- # es=1 00:06:49.084 10:34:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.084 10:34:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.084 10:34:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.084 10:34:19 -- app/cmdline.sh@1 -- # killprocess 60071 00:06:49.084 10:34:19 -- common/autotest_common.sh@936 -- # '[' -z 60071 ']' 00:06:49.084 10:34:19 -- common/autotest_common.sh@940 -- # kill -0 60071 00:06:49.084 10:34:19 -- common/autotest_common.sh@941 -- # uname 00:06:49.084 10:34:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.084 10:34:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60071 00:06:49.084 killing process with pid 60071 00:06:49.084 10:34:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.084 10:34:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.084 10:34:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60071' 00:06:49.084 10:34:19 -- common/autotest_common.sh@955 -- # kill 60071 00:06:49.084 10:34:19 -- common/autotest_common.sh@960 -- # wait 60071 00:06:50.999 00:06:50.999 real 0m3.734s 00:06:50.999 user 0m4.119s 00:06:50.999 sys 0m0.460s 00:06:50.999 10:34:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.999 10:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:50.999 ************************************ 00:06:50.999 END TEST app_cmdline 00:06:50.999 ************************************ 00:06:50.999 10:34:21 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:50.999 10:34:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.999 10:34:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.999 10:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:50.999 ************************************ 00:06:50.999 START TEST version 00:06:50.999 ************************************ 00:06:50.999 10:34:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:50.999 * Looking for test storage... 
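The cmdline suite starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable; that is why env_dpdk_get_mem_stats comes back with the -32601 "Method not found" response captured above. Repeating the same checks by hand would look roughly like this (rpc.py path as logged):

    # Allowed: prints the version object shown in the log.
    ./scripts/rpc.py spdk_get_version
    # Off the allow-list: fails with code -32601, "Method not found".
    ./scripts/rpc.py env_dpdk_get_mem_stats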
00:06:50.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:50.999 10:34:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:51.000 10:34:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:51.000 10:34:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:51.000 10:34:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:51.000 10:34:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:51.000 10:34:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:51.000 10:34:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:51.000 10:34:21 -- scripts/common.sh@335 -- # IFS=.-: 00:06:51.000 10:34:21 -- scripts/common.sh@335 -- # read -ra ver1 00:06:51.000 10:34:21 -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.000 10:34:21 -- scripts/common.sh@336 -- # read -ra ver2 00:06:51.000 10:34:21 -- scripts/common.sh@337 -- # local 'op=<' 00:06:51.000 10:34:21 -- scripts/common.sh@339 -- # ver1_l=2 00:06:51.000 10:34:21 -- scripts/common.sh@340 -- # ver2_l=1 00:06:51.000 10:34:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:51.000 10:34:21 -- scripts/common.sh@343 -- # case "$op" in 00:06:51.000 10:34:21 -- scripts/common.sh@344 -- # : 1 00:06:51.000 10:34:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:51.000 10:34:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.000 10:34:21 -- scripts/common.sh@364 -- # decimal 1 00:06:51.000 10:34:21 -- scripts/common.sh@352 -- # local d=1 00:06:51.000 10:34:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.000 10:34:21 -- scripts/common.sh@354 -- # echo 1 00:06:51.000 10:34:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:51.000 10:34:21 -- scripts/common.sh@365 -- # decimal 2 00:06:51.000 10:34:21 -- scripts/common.sh@352 -- # local d=2 00:06:51.000 10:34:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.000 10:34:21 -- scripts/common.sh@354 -- # echo 2 00:06:51.000 10:34:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:51.000 10:34:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:51.000 10:34:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:51.000 10:34:21 -- scripts/common.sh@367 -- # return 0 00:06:51.000 10:34:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.000 10:34:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.000 --rc genhtml_branch_coverage=1 00:06:51.000 --rc genhtml_function_coverage=1 00:06:51.000 --rc genhtml_legend=1 00:06:51.000 --rc geninfo_all_blocks=1 00:06:51.000 --rc geninfo_unexecuted_blocks=1 00:06:51.000 00:06:51.000 ' 00:06:51.000 10:34:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.000 --rc genhtml_branch_coverage=1 00:06:51.000 --rc genhtml_function_coverage=1 00:06:51.000 --rc genhtml_legend=1 00:06:51.000 --rc geninfo_all_blocks=1 00:06:51.000 --rc geninfo_unexecuted_blocks=1 00:06:51.000 00:06:51.000 ' 00:06:51.000 10:34:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.000 --rc genhtml_branch_coverage=1 00:06:51.000 --rc genhtml_function_coverage=1 00:06:51.000 --rc genhtml_legend=1 00:06:51.000 --rc geninfo_all_blocks=1 00:06:51.000 --rc geninfo_unexecuted_blocks=1 00:06:51.000 00:06:51.000 ' 00:06:51.000 10:34:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.000 --rc genhtml_branch_coverage=1 00:06:51.000 --rc genhtml_function_coverage=1 00:06:51.000 --rc genhtml_legend=1 00:06:51.000 --rc geninfo_all_blocks=1 00:06:51.000 --rc geninfo_unexecuted_blocks=1 00:06:51.000 00:06:51.000 ' 00:06:51.000 10:34:21 -- app/version.sh@17 -- # get_header_version major 00:06:51.000 10:34:21 -- app/version.sh@14 -- # cut -f2 00:06:51.000 10:34:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.000 10:34:21 -- app/version.sh@14 -- # tr -d '"' 00:06:51.000 10:34:21 -- app/version.sh@17 -- # major=24 00:06:51.000 10:34:21 -- app/version.sh@18 -- # get_header_version minor 00:06:51.000 10:34:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.000 10:34:21 -- app/version.sh@14 -- # cut -f2 00:06:51.000 10:34:21 -- app/version.sh@14 -- # tr -d '"' 00:06:51.000 10:34:21 -- app/version.sh@18 -- # minor=1 00:06:51.000 10:34:21 -- app/version.sh@19 -- # get_header_version patch 00:06:51.000 10:34:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.000 10:34:21 -- app/version.sh@14 -- # cut -f2 00:06:51.000 10:34:21 -- app/version.sh@14 -- # tr -d '"' 00:06:51.000 10:34:21 -- app/version.sh@19 -- # patch=1 00:06:51.000 10:34:21 -- app/version.sh@20 -- # get_header_version suffix 00:06:51.000 10:34:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:51.000 10:34:21 -- app/version.sh@14 -- # cut -f2 00:06:51.000 10:34:21 -- app/version.sh@14 -- # tr -d '"' 00:06:51.000 10:34:21 -- app/version.sh@20 -- # suffix=-pre 00:06:51.000 10:34:21 -- app/version.sh@22 -- # version=24.1 00:06:51.000 10:34:21 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:51.000 10:34:21 -- app/version.sh@25 -- # version=24.1.1 00:06:51.000 10:34:21 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:51.000 10:34:21 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:51.000 10:34:21 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:51.000 10:34:21 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:51.000 10:34:21 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:51.000 ************************************ 00:06:51.000 END TEST version 00:06:51.000 ************************************ 00:06:51.000 00:06:51.000 real 0m0.183s 00:06:51.000 user 0m0.125s 00:06:51.000 sys 0m0.085s 00:06:51.000 10:34:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.000 10:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.000 10:34:21 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:51.000 10:34:21 -- spdk/autotest.sh@191 -- # uname -s 00:06:51.000 10:34:21 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:06:51.000 10:34:21 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:51.000 10:34:21 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:51.000 10:34:21 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:06:51.000 10:34:21 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme 
/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:51.000 10:34:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:51.000 10:34:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.000 10:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.000 ************************************ 00:06:51.000 START TEST blockdev_nvme 00:06:51.000 ************************************ 00:06:51.000 10:34:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:51.000 * Looking for test storage... 00:06:51.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:51.000 10:34:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:51.000 10:34:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:51.000 10:34:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:51.262 10:34:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:51.262 10:34:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:51.262 10:34:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:51.262 10:34:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:51.262 10:34:21 -- scripts/common.sh@335 -- # IFS=.-: 00:06:51.262 10:34:21 -- scripts/common.sh@335 -- # read -ra ver1 00:06:51.262 10:34:21 -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.262 10:34:21 -- scripts/common.sh@336 -- # read -ra ver2 00:06:51.262 10:34:21 -- scripts/common.sh@337 -- # local 'op=<' 00:06:51.262 10:34:21 -- scripts/common.sh@339 -- # ver1_l=2 00:06:51.262 10:34:21 -- scripts/common.sh@340 -- # ver2_l=1 00:06:51.262 10:34:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:51.262 10:34:21 -- scripts/common.sh@343 -- # case "$op" in 00:06:51.262 10:34:21 -- scripts/common.sh@344 -- # : 1 00:06:51.262 10:34:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:51.262 10:34:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.262 10:34:21 -- scripts/common.sh@364 -- # decimal 1 00:06:51.262 10:34:21 -- scripts/common.sh@352 -- # local d=1 00:06:51.262 10:34:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.262 10:34:21 -- scripts/common.sh@354 -- # echo 1 00:06:51.262 10:34:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:51.262 10:34:21 -- scripts/common.sh@365 -- # decimal 2 00:06:51.262 10:34:21 -- scripts/common.sh@352 -- # local d=2 00:06:51.262 10:34:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.262 10:34:21 -- scripts/common.sh@354 -- # echo 2 00:06:51.262 10:34:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:51.262 10:34:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:51.262 10:34:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:51.262 10:34:21 -- scripts/common.sh@367 -- # return 0 00:06:51.262 10:34:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.262 10:34:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:51.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.262 --rc genhtml_branch_coverage=1 00:06:51.262 --rc genhtml_function_coverage=1 00:06:51.262 --rc genhtml_legend=1 00:06:51.262 --rc geninfo_all_blocks=1 00:06:51.262 --rc geninfo_unexecuted_blocks=1 00:06:51.262 00:06:51.262 ' 00:06:51.262 10:34:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:51.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.262 --rc genhtml_branch_coverage=1 00:06:51.262 --rc genhtml_function_coverage=1 00:06:51.262 --rc genhtml_legend=1 00:06:51.262 --rc geninfo_all_blocks=1 00:06:51.262 --rc geninfo_unexecuted_blocks=1 00:06:51.262 00:06:51.262 ' 00:06:51.262 10:34:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:51.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.262 --rc genhtml_branch_coverage=1 00:06:51.262 --rc genhtml_function_coverage=1 00:06:51.262 --rc genhtml_legend=1 00:06:51.262 --rc geninfo_all_blocks=1 00:06:51.262 --rc geninfo_unexecuted_blocks=1 00:06:51.262 00:06:51.262 ' 00:06:51.262 10:34:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:51.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.262 --rc genhtml_branch_coverage=1 00:06:51.262 --rc genhtml_function_coverage=1 00:06:51.262 --rc genhtml_legend=1 00:06:51.262 --rc geninfo_all_blocks=1 00:06:51.262 --rc geninfo_unexecuted_blocks=1 00:06:51.262 00:06:51.262 ' 00:06:51.262 10:34:21 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:51.262 10:34:21 -- bdev/nbd_common.sh@6 -- # set -e 00:06:51.262 10:34:21 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:51.262 10:34:21 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:51.262 10:34:21 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:51.262 10:34:21 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:51.262 10:34:21 -- bdev/blockdev.sh@18 -- # : 00:06:51.262 10:34:21 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:06:51.262 10:34:21 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:06:51.262 10:34:21 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:06:51.262 10:34:21 -- bdev/blockdev.sh@672 -- # uname -s 00:06:51.262 10:34:21 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:06:51.262 10:34:21 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:06:51.262 10:34:21 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:06:51.262 10:34:21 -- bdev/blockdev.sh@681 -- # crypto_device= 00:06:51.262 10:34:21 -- bdev/blockdev.sh@682 -- # dek= 00:06:51.262 10:34:21 -- bdev/blockdev.sh@683 -- # env_ctx= 00:06:51.262 10:34:21 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:06:51.262 10:34:21 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:06:51.262 10:34:21 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:06:51.262 10:34:21 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:06:51.262 10:34:21 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:06:51.262 10:34:21 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60254 00:06:51.262 10:34:21 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:51.262 10:34:21 -- bdev/blockdev.sh@47 -- # waitforlisten 60254 00:06:51.262 10:34:21 -- common/autotest_common.sh@829 -- # '[' -z 60254 ']' 00:06:51.262 10:34:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.262 10:34:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.262 10:34:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.262 10:34:21 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:51.262 10:34:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.262 10:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.262 [2024-12-03 10:34:21.709766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.262 [2024-12-03 10:34:21.709875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:06:51.262 [2024-12-03 10:34:21.858636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.524 [2024-12-03 10:34:22.061422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.525 [2024-12-03 10:34:22.061641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.912 10:34:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.912 10:34:23 -- common/autotest_common.sh@862 -- # return 0 00:06:52.912 10:34:23 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:06:52.912 10:34:23 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:06:52.912 10:34:23 -- bdev/blockdev.sh@79 -- # local json 00:06:52.912 10:34:23 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:06:52.912 10:34:23 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:52.912 10:34:23 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:06:52.912 10:34:23 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.912 10:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:52.912 10:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.912 10:34:23 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:06:52.912 10:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.912 10:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:52.912 10:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.912 10:34:23 -- bdev/blockdev.sh@738 -- # cat 00:06:52.912 10:34:23 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:06:52.913 10:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.913 10:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.176 10:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.176 10:34:23 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:06:53.176 10:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.176 10:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.176 10:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.176 10:34:23 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:53.176 10:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.176 10:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.176 10:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.176 10:34:23 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:06:53.176 10:34:23 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:06:53.176 10:34:23 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:06:53.176 10:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.176 10:34:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.176 10:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.176 10:34:23 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:06:53.176 10:34:23 -- bdev/blockdev.sh@747 -- # jq -r .name 00:06:53.176 10:34:23 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "10248417-1d2c-4423-b525-af5bbc83f4b0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "10248417-1d2c-4423-b525-af5bbc83f4b0",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3dbfebf4-5a67-491f-84c7-e6ddf3b210d3"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3dbfebf4-5a67-491f-84c7-e6ddf3b210d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f55d6a37-beb5-4531-b0e7-ae606bccf320"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f55d6a37-beb5-4531-b0e7-ae606bccf320",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e5adfad2-0112-45b0-bde2-6ed9f794dfdc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5adfad2-0112-45b0-bde2-6ed9f794dfdc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e5928ffc-c353-4c7e-adfb-2186ecd7599d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5928ffc-c353-4c7e-adfb-2186ecd7599d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "855914ed-7c52-491b-8ac6-2f092bc8eb98"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "855914ed-7c52-491b-8ac6-2f092bc8eb98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:53.176 10:34:23 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:06:53.176 10:34:23 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:06:53.176 10:34:23 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:06:53.176 10:34:23 -- bdev/blockdev.sh@752 -- # killprocess 60254 00:06:53.176 10:34:23 -- common/autotest_common.sh@936 -- # '[' -z 60254 ']' 00:06:53.176 10:34:23 -- common/autotest_common.sh@940 -- # kill -0 60254 00:06:53.176 10:34:23 -- common/autotest_common.sh@941 -- # uname 00:06:53.176 10:34:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:53.176 10:34:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60254 
00:06:53.176 10:34:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:53.176 10:34:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:53.176 killing process with pid 60254 00:06:53.176 10:34:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60254' 00:06:53.176 10:34:23 -- common/autotest_common.sh@955 -- # kill 60254 00:06:53.176 10:34:23 -- common/autotest_common.sh@960 -- # wait 60254 00:06:55.099 10:34:25 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:55.099 10:34:25 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.099 10:34:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:55.099 10:34:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.099 10:34:25 -- common/autotest_common.sh@10 -- # set +x 00:06:55.099 ************************************ 00:06:55.099 START TEST bdev_hello_world 00:06:55.099 ************************************ 00:06:55.099 10:34:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.099 [2024-12-03 10:34:25.318011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.099 [2024-12-03 10:34:25.318130] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60340 ] 00:06:55.099 [2024-12-03 10:34:25.467589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.099 [2024-12-03 10:34:25.669732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.673 [2024-12-03 10:34:26.214324] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:55.673 [2024-12-03 10:34:26.214372] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:55.673 [2024-12-03 10:34:26.214391] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:55.673 [2024-12-03 10:34:26.216912] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:55.673 [2024-12-03 10:34:26.217369] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:55.673 [2024-12-03 10:34:26.217396] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:55.673 [2024-12-03 10:34:26.217654] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
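At this point hello_bdev has opened Nvme0n1, written its test string, and read back "Hello World!". The run can be reproduced by hand with the same arguments the harness used above (relative paths shown here; the job itself uses the absolute /home/vagrant/spdk_repo workspace paths).

  # Minimal manual rerun of the hello-world example (sketch)
  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1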
00:06:55.673 00:06:55.673 [2024-12-03 10:34:26.217680] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:56.619 00:06:56.619 real 0m1.803s 00:06:56.619 user 0m1.510s 00:06:56.619 sys 0m0.185s 00:06:56.619 10:34:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.619 10:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.619 ************************************ 00:06:56.619 END TEST bdev_hello_world 00:06:56.619 ************************************ 00:06:56.619 10:34:27 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:06:56.619 10:34:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:56.619 10:34:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.619 10:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.619 ************************************ 00:06:56.619 START TEST bdev_bounds 00:06:56.619 ************************************ 00:06:56.619 10:34:27 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:06:56.619 10:34:27 -- bdev/blockdev.sh@288 -- # bdevio_pid=60382 00:06:56.619 10:34:27 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.619 Process bdevio pid: 60382 00:06:56.619 10:34:27 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60382' 00:06:56.619 10:34:27 -- bdev/blockdev.sh@291 -- # waitforlisten 60382 00:06:56.619 10:34:27 -- common/autotest_common.sh@829 -- # '[' -z 60382 ']' 00:06:56.619 10:34:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.619 10:34:27 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:56.619 10:34:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.619 10:34:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.619 10:34:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.619 10:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.619 [2024-12-03 10:34:27.162283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
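The bdev_bounds test starting here runs bdevio as an RPC-driven server: the -w flag makes it wait until the suites are kicked off externally, which the harness does with tests.py once the socket is up. A sketch of that two-step flow, assuming the same config file:

  # Start bdevio waiting on its RPC socket, then trigger the suites from a second shell
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  test/bdev/bdevio/tests.py perform_tests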
00:06:56.619 [2024-12-03 10:34:27.162391] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:06:56.881 [2024-12-03 10:34:27.304464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.142 [2024-12-03 10:34:27.505282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.142 [2024-12-03 10:34:27.505730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.142 [2024-12-03 10:34:27.506024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.087 10:34:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.087 10:34:28 -- common/autotest_common.sh@862 -- # return 0 00:06:58.087 10:34:28 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:58.350 I/O targets: 00:06:58.350 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:58.350 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:58.350 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.350 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.350 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.350 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:58.350 00:06:58.350 00:06:58.350 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.350 http://cunit.sourceforge.net/ 00:06:58.350 00:06:58.350 00:06:58.350 Suite: bdevio tests on: Nvme3n1 00:06:58.350 Test: blockdev write read block ...passed 00:06:58.350 Test: blockdev write zeroes read block ...passed 00:06:58.350 Test: blockdev write zeroes read no split ...passed 00:06:58.350 Test: blockdev write zeroes read split ...passed 00:06:58.350 Test: blockdev write zeroes read split partial ...passed 00:06:58.350 Test: blockdev reset ...[2024-12-03 10:34:28.801389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:06:58.350 [2024-12-03 10:34:28.804302] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:58.350 passed 00:06:58.350 Test: blockdev write read 8 blocks ...passed 00:06:58.350 Test: blockdev write read size > 128k ...passed 00:06:58.350 Test: blockdev write read invalid size ...passed 00:06:58.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.350 Test: blockdev write read max offset ...passed 00:06:58.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.350 Test: blockdev writev readv 8 blocks ...passed 00:06:58.350 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.350 Test: blockdev writev readv block ...passed 00:06:58.350 Test: blockdev writev readv size > 128k ...passed 00:06:58.350 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.350 Test: blockdev comparev and writev ...[2024-12-03 10:34:28.810669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276e0e000 len:0x1000 00:06:58.350 [2024-12-03 10:34:28.810724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.350 passed 00:06:58.350 Test: blockdev nvme passthru rw ...passed 00:06:58.350 Test: blockdev nvme passthru vendor specific ...[2024-12-03 10:34:28.811422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.350 [2024-12-03 10:34:28.811449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.350 passed 00:06:58.350 Test: blockdev nvme admin passthru ...passed 00:06:58.350 Test: blockdev copy ...passed 00:06:58.350 Suite: bdevio tests on: Nvme2n3 00:06:58.350 Test: blockdev write read block ...passed 00:06:58.350 Test: blockdev write zeroes read block ...passed 00:06:58.350 Test: blockdev write zeroes read no split ...passed 00:06:58.350 Test: blockdev write zeroes read split ...passed 00:06:58.350 Test: blockdev write zeroes read split partial ...passed 00:06:58.350 Test: blockdev reset ...[2024-12-03 10:34:28.853755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:06:58.350 [2024-12-03 10:34:28.856760] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:58.350 passed 00:06:58.350 Test: blockdev write read 8 blocks ...passed 00:06:58.350 Test: blockdev write read size > 128k ...passed 00:06:58.350 Test: blockdev write read invalid size ...passed 00:06:58.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.350 Test: blockdev write read max offset ...passed 00:06:58.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.350 Test: blockdev writev readv 8 blocks ...passed 00:06:58.350 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.350 Test: blockdev writev readv block ...passed 00:06:58.350 Test: blockdev writev readv size > 128k ...passed 00:06:58.350 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.350 Test: blockdev comparev and writev ...[2024-12-03 10:34:28.863181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276e0a000 len:0x1000 00:06:58.350 passed 00:06:58.350 Test: blockdev nvme passthru rw ...[2024-12-03 10:34:28.863255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.350 passed 00:06:58.350 Test: blockdev nvme passthru vendor specific ...[2024-12-03 10:34:28.863781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.350 passed 00:06:58.350 Test: blockdev nvme admin passthru ...[2024-12-03 10:34:28.863812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.351 passed 00:06:58.351 Test: blockdev copy ...passed 00:06:58.351 Suite: bdevio tests on: Nvme2n2 00:06:58.351 Test: blockdev write read block ...passed 00:06:58.351 Test: blockdev write zeroes read block ...passed 00:06:58.351 Test: blockdev write zeroes read no split ...passed 00:06:58.351 Test: blockdev write zeroes read split ...passed 00:06:58.351 Test: blockdev write zeroes read split partial ...passed 00:06:58.351 Test: blockdev reset ...[2024-12-03 10:34:28.907106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:06:58.351 [2024-12-03 10:34:28.910043] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:58.351 passed 00:06:58.351 Test: blockdev write read 8 blocks ...passed 00:06:58.351 Test: blockdev write read size > 128k ...passed 00:06:58.351 Test: blockdev write read invalid size ...passed 00:06:58.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.351 Test: blockdev write read max offset ...passed 00:06:58.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.351 Test: blockdev writev readv 8 blocks ...passed 00:06:58.351 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.351 Test: blockdev writev readv block ...passed 00:06:58.351 Test: blockdev writev readv size > 128k ...passed 00:06:58.351 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.351 Test: blockdev comparev and writev ...[2024-12-03 10:34:28.915723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271806000 len:0x1000 00:06:58.351 [2024-12-03 10:34:28.915766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.351 passed 00:06:58.351 Test: blockdev nvme passthru rw ...passed 00:06:58.351 Test: blockdev nvme passthru vendor specific ...[2024-12-03 10:34:28.916398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.351 [2024-12-03 10:34:28.916422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.351 passed 00:06:58.351 Test: blockdev nvme admin passthru ...passed 00:06:58.351 Test: blockdev copy ...passed 00:06:58.351 Suite: bdevio tests on: Nvme2n1 00:06:58.351 Test: blockdev write read block ...passed 00:06:58.351 Test: blockdev write zeroes read block ...passed 00:06:58.351 Test: blockdev write zeroes read no split ...passed 00:06:58.351 Test: blockdev write zeroes read split ...passed 00:06:58.613 Test: blockdev write zeroes read split partial ...passed 00:06:58.613 Test: blockdev reset ...[2024-12-03 10:34:28.961413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:06:58.613 [2024-12-03 10:34:28.964309] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:58.613 passed 00:06:58.613 Test: blockdev write read 8 blocks ...passed 00:06:58.613 Test: blockdev write read size > 128k ...passed 00:06:58.613 Test: blockdev write read invalid size ...passed 00:06:58.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.613 Test: blockdev write read max offset ...passed 00:06:58.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.613 Test: blockdev writev readv 8 blocks ...passed 00:06:58.613 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.613 Test: blockdev writev readv block ...passed 00:06:58.613 Test: blockdev writev readv size > 128k ...passed 00:06:58.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.613 Test: blockdev comparev and writev ...[2024-12-03 10:34:28.970448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271801000 len:0x1000 00:06:58.613 [2024-12-03 10:34:28.970487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.613 passed 00:06:58.613 Test: blockdev nvme passthru rw ...passed 00:06:58.613 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.613 Test: blockdev nvme admin passthru ...[2024-12-03 10:34:28.971190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.613 [2024-12-03 10:34:28.971230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.613 passed 00:06:58.613 Test: blockdev copy ...passed 00:06:58.613 Suite: bdevio tests on: Nvme1n1 00:06:58.613 Test: blockdev write read block ...passed 00:06:58.613 Test: blockdev write zeroes read block ...passed 00:06:58.613 Test: blockdev write zeroes read no split ...passed 00:06:58.613 Test: blockdev write zeroes read split ...passed 00:06:58.613 Test: blockdev write zeroes read split partial ...passed 00:06:58.613 Test: blockdev reset ...[2024-12-03 10:34:29.012998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:06:58.613 [2024-12-03 10:34:29.015777] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:58.613 passed 00:06:58.613 Test: blockdev write read 8 blocks ...passed 00:06:58.613 Test: blockdev write read size > 128k ...passed 00:06:58.613 Test: blockdev write read invalid size ...passed 00:06:58.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.613 Test: blockdev write read max offset ...passed 00:06:58.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.613 Test: blockdev writev readv 8 blocks ...passed 00:06:58.613 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.613 Test: blockdev writev readv block ...passed 00:06:58.613 Test: blockdev writev readv size > 128k ...passed 00:06:58.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.613 Test: blockdev comparev and writev ...[2024-12-03 10:34:29.021833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27de06000 len:0x1000 00:06:58.613 [2024-12-03 10:34:29.021871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.613 passed 00:06:58.613 Test: blockdev nvme passthru rw ...passed 00:06:58.613 Test: blockdev nvme passthru vendor specific ...[2024-12-03 10:34:29.022406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.613 passed 00:06:58.613 Test: blockdev nvme admin passthru ...[2024-12-03 10:34:29.022434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.613 passed 00:06:58.613 Test: blockdev copy ...passed 00:06:58.613 Suite: bdevio tests on: Nvme0n1 00:06:58.613 Test: blockdev write read block ...passed 00:06:58.613 Test: blockdev write zeroes read block ...passed 00:06:58.613 Test: blockdev write zeroes read no split ...passed 00:06:58.613 Test: blockdev write zeroes read split ...passed 00:06:58.613 Test: blockdev write zeroes read split partial ...passed 00:06:58.613 Test: blockdev reset ...[2024-12-03 10:34:29.066258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:06:58.613 [2024-12-03 10:34:29.068829] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:58.613 passed 00:06:58.613 Test: blockdev write read 8 blocks ...passed 00:06:58.613 Test: blockdev write read size > 128k ...passed 00:06:58.613 Test: blockdev write read invalid size ...passed 00:06:58.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.613 Test: blockdev write read max offset ...passed 00:06:58.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.613 Test: blockdev writev readv 8 blocks ...passed 00:06:58.613 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.613 Test: blockdev writev readv block ...passed 00:06:58.613 Test: blockdev writev readv size > 128k ...passed 00:06:58.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.613 Test: blockdev comparev and writev ...[2024-12-03 10:34:29.073857] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:58.613 separate metadata which is not supported yet. 
00:06:58.613 passed 00:06:58.613 Test: blockdev nvme passthru rw ...passed 00:06:58.613 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.613 Test: blockdev nvme admin passthru ...[2024-12-03 10:34:29.074239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:58.613 [2024-12-03 10:34:29.074269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:58.613 passed 00:06:58.613 Test: blockdev copy ...passed 00:06:58.613 00:06:58.613 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.613 suites 6 6 n/a 0 0 00:06:58.613 tests 138 138 138 0 0 00:06:58.613 asserts 893 893 893 0 n/a 00:06:58.613 00:06:58.613 Elapsed time = 0.874 seconds 00:06:58.613 0 00:06:58.613 10:34:29 -- bdev/blockdev.sh@293 -- # killprocess 60382 00:06:58.613 10:34:29 -- common/autotest_common.sh@936 -- # '[' -z 60382 ']' 00:06:58.613 10:34:29 -- common/autotest_common.sh@940 -- # kill -0 60382 00:06:58.613 10:34:29 -- common/autotest_common.sh@941 -- # uname 00:06:58.613 10:34:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.613 10:34:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60382 00:06:58.613 10:34:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.613 10:34:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.613 killing process with pid 60382 00:06:58.613 10:34:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60382' 00:06:58.613 10:34:29 -- common/autotest_common.sh@955 -- # kill 60382 00:06:58.613 10:34:29 -- common/autotest_common.sh@960 -- # wait 60382 00:06:59.558 10:34:29 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:06:59.558 00:06:59.558 real 0m2.743s 00:06:59.558 user 0m7.186s 00:06:59.558 sys 0m0.285s 00:06:59.558 10:34:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.558 ************************************ 00:06:59.558 END TEST bdev_bounds 00:06:59.558 ************************************ 00:06:59.558 10:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.558 10:34:29 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:59.558 10:34:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:59.558 10:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.558 10:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.558 ************************************ 00:06:59.558 START TEST bdev_nbd 00:06:59.558 ************************************ 00:06:59.558 10:34:29 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:59.558 10:34:29 -- bdev/blockdev.sh@298 -- # uname -s 00:06:59.558 10:34:29 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:06:59.558 10:34:29 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.558 10:34:29 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.558 10:34:29 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.558 10:34:29 -- bdev/blockdev.sh@302 -- # local bdev_all 00:06:59.558 10:34:29 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:06:59.558 10:34:29 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:06:59.558 10:34:29 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:59.558 10:34:29 -- bdev/blockdev.sh@309 -- # local nbd_all 00:06:59.558 10:34:29 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:06:59.558 10:34:29 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:59.558 10:34:29 -- bdev/blockdev.sh@312 -- # local nbd_list 00:06:59.558 10:34:29 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.558 10:34:29 -- bdev/blockdev.sh@313 -- # local bdev_list 00:06:59.558 10:34:29 -- bdev/blockdev.sh@316 -- # nbd_pid=60449 00:06:59.558 10:34:29 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:59.558 10:34:29 -- bdev/blockdev.sh@318 -- # waitforlisten 60449 /var/tmp/spdk-nbd.sock 00:06:59.558 10:34:29 -- common/autotest_common.sh@829 -- # '[' -z 60449 ']' 00:06:59.558 10:34:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.558 10:34:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.558 10:34:29 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.558 10:34:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.558 10:34:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.558 10:34:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.558 [2024-12-03 10:34:29.947294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
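The nbd test only proceeds because the [[ -e /sys/module/nbd ]] check above succeeded; on a machine where the kernel nbd driver is not yet loaded, the equivalent preparation would be:

  # Ensure the kernel nbd driver is available before exporting any bdevs (sketch)
  [[ -e /sys/module/nbd ]] || sudo modprobe nbd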
00:06:59.558 [2024-12-03 10:34:29.947402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.558 [2024-12-03 10:34:30.098616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.819 [2024-12-03 10:34:30.306389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.206 10:34:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.207 10:34:31 -- common/autotest_common.sh@862 -- # return 0 00:07:01.207 10:34:31 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@24 -- # local i 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:01.207 10:34:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:01.207 10:34:31 -- common/autotest_common.sh@867 -- # local i 00:07:01.207 10:34:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.207 10:34:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:01.207 10:34:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:01.207 10:34:31 -- common/autotest_common.sh@871 -- # break 00:07:01.207 10:34:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.207 10:34:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.207 10:34:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.207 1+0 records in 00:07:01.207 1+0 records out 00:07:01.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301389 s, 13.6 MB/s 00:07:01.207 10:34:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.207 10:34:31 -- common/autotest_common.sh@884 -- # size=4096 00:07:01.207 10:34:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.207 10:34:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:01.207 10:34:31 -- common/autotest_common.sh@887 -- # return 0 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.207 10:34:31 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.207 10:34:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:01.468 10:34:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:01.468 10:34:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:01.468 10:34:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:01.468 10:34:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:01.468 10:34:31 -- common/autotest_common.sh@867 -- # local i 00:07:01.468 10:34:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.468 10:34:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:01.468 10:34:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:01.468 10:34:31 -- common/autotest_common.sh@871 -- # break 00:07:01.468 10:34:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.468 10:34:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.468 10:34:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.468 1+0 records in 00:07:01.468 1+0 records out 00:07:01.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781272 s, 5.2 MB/s 00:07:01.468 10:34:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.468 10:34:31 -- common/autotest_common.sh@884 -- # size=4096 00:07:01.468 10:34:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.468 10:34:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:01.468 10:34:31 -- common/autotest_common.sh@887 -- # return 0 00:07:01.468 10:34:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.468 10:34:31 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.468 10:34:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:01.730 10:34:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:01.730 10:34:32 -- common/autotest_common.sh@867 -- # local i 00:07:01.730 10:34:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:01.730 10:34:32 -- common/autotest_common.sh@871 -- # break 00:07:01.730 10:34:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.730 1+0 records in 00:07:01.730 1+0 records out 00:07:01.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423239 s, 9.7 MB/s 00:07:01.730 10:34:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.730 10:34:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:01.730 10:34:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.730 10:34:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:01.730 10:34:32 -- common/autotest_common.sh@887 -- # return 0 
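The repeated grep/dd/stat sequences in the xtrace above are the waitfornbd helper verifying each export. Condensed into one function (a sketch of the same logic, not the exact autotest_common.sh source; the scratch-file path is illustrative):

  waitfornbd() {
    local nbd_name=$1 i size
    # poll until the kernel registers the device, as the log's grep loop does
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # a single direct 4k read proves the export actually serves I/O
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [[ $size != 0 ]]
  }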
00:07:01.730 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:01.730 10:34:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:01.730 10:34:32 -- common/autotest_common.sh@867 -- # local i 00:07:01.730 10:34:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:01.730 10:34:32 -- common/autotest_common.sh@871 -- # break 00:07:01.730 10:34:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.730 10:34:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.730 1+0 records in 00:07:01.730 1+0 records out 00:07:01.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553268 s, 7.4 MB/s 00:07:01.730 10:34:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.730 10:34:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:01.730 10:34:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.730 10:34:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:01.730 10:34:32 -- common/autotest_common.sh@887 -- # return 0 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.730 10:34:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:01.991 10:34:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:01.991 10:34:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:01.991 10:34:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:01.991 10:34:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:01.991 10:34:32 -- common/autotest_common.sh@867 -- # local i 00:07:01.991 10:34:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.991 10:34:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:01.991 10:34:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:01.991 10:34:32 -- common/autotest_common.sh@871 -- # break 00:07:01.991 10:34:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.991 10:34:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.991 10:34:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.991 1+0 records in 00:07:01.991 1+0 records out 00:07:01.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789532 s, 5.2 MB/s 00:07:01.991 10:34:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.991 10:34:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:01.992 10:34:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.992 10:34:32 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:07:01.992 10:34:32 -- common/autotest_common.sh@887 -- # return 0 00:07:01.992 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.992 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.992 10:34:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:02.253 10:34:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:02.253 10:34:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:02.253 10:34:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:02.253 10:34:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:02.253 10:34:32 -- common/autotest_common.sh@867 -- # local i 00:07:02.253 10:34:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:02.253 10:34:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:02.253 10:34:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:02.253 10:34:32 -- common/autotest_common.sh@871 -- # break 00:07:02.253 10:34:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:02.253 10:34:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:02.254 10:34:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.254 1+0 records in 00:07:02.254 1+0 records out 00:07:02.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751566 s, 5.4 MB/s 00:07:02.254 10:34:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.254 10:34:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:02.254 10:34:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.254 10:34:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:02.254 10:34:32 -- common/autotest_common.sh@887 -- # return 0 00:07:02.254 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.254 10:34:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.254 10:34:32 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd0", 00:07:02.516 "bdev_name": "Nvme0n1" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd1", 00:07:02.516 "bdev_name": "Nvme1n1" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd2", 00:07:02.516 "bdev_name": "Nvme2n1" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd3", 00:07:02.516 "bdev_name": "Nvme2n2" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd4", 00:07:02.516 "bdev_name": "Nvme2n3" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd5", 00:07:02.516 "bdev_name": "Nvme3n1" 00:07:02.516 } 00:07:02.516 ]' 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd0", 00:07:02.516 "bdev_name": "Nvme0n1" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd1", 00:07:02.516 "bdev_name": "Nvme1n1" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd2", 00:07:02.516 "bdev_name": "Nvme2n1" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd3", 00:07:02.516 
"bdev_name": "Nvme2n2" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd4", 00:07:02.516 "bdev_name": "Nvme2n3" 00:07:02.516 }, 00:07:02.516 { 00:07:02.516 "nbd_device": "/dev/nbd5", 00:07:02.516 "bdev_name": "Nvme3n1" 00:07:02.516 } 00:07:02.516 ]' 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@51 -- # local i 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.516 10:34:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@41 -- # break 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.778 10:34:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@41 -- # break 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@41 -- # break 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.040 10:34:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:03.302 
10:34:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@41 -- # break 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.302 10:34:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@41 -- # break 00:07:03.562 10:34:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.563 10:34:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.563 10:34:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:03.563 10:34:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:03.563 10:34:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:03.563 10:34:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:03.563 10:34:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.563 10:34:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.563 10:34:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@41 -- # break 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@65 -- # true 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@122 -- # count=0 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@127 -- # return 0 00:07:03.823 10:34:34 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@12 -- # local i 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:03.823 10:34:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:04.083 /dev/nbd0 00:07:04.083 10:34:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.083 10:34:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.083 10:34:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:04.083 10:34:34 -- common/autotest_common.sh@867 -- # local i 00:07:04.083 10:34:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.083 10:34:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.083 10:34:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:04.084 10:34:34 -- common/autotest_common.sh@871 -- # break 00:07:04.084 10:34:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.084 10:34:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.084 10:34:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.084 1+0 records in 00:07:04.084 1+0 records out 00:07:04.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402853 s, 10.2 MB/s 00:07:04.084 10:34:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.084 10:34:34 -- common/autotest_common.sh@884 -- # size=4096 00:07:04.084 10:34:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.084 10:34:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.084 10:34:34 -- common/autotest_common.sh@887 -- # return 0 00:07:04.084 10:34:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.084 10:34:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.084 10:34:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:04.345 /dev/nbd1 00:07:04.345 10:34:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.345 10:34:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.345 10:34:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:04.345 10:34:34 -- common/autotest_common.sh@867 -- # local i 00:07:04.345 10:34:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.345 10:34:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.345 10:34:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:04.345 10:34:34 -- common/autotest_common.sh@871 -- # break 
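Each bdev is exported through the dedicated nbd RPC socket with one nbd_start_disk call per device, exactly as the xtrace lines above show. Standalone, the same export looks like this (socket path as used in this job):

  # Export one bdev as a kernel block device (sketch)
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0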
00:07:04.345 10:34:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.345 10:34:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.345 10:34:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.345 1+0 records in 00:07:04.346 1+0 records out 00:07:04.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428394 s, 9.6 MB/s 00:07:04.346 10:34:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.346 10:34:34 -- common/autotest_common.sh@884 -- # size=4096 00:07:04.346 10:34:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.346 10:34:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.346 10:34:34 -- common/autotest_common.sh@887 -- # return 0 00:07:04.346 10:34:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.346 10:34:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.346 10:34:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:04.607 /dev/nbd10 00:07:04.607 10:34:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:04.607 10:34:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:04.607 10:34:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:04.607 10:34:35 -- common/autotest_common.sh@867 -- # local i 00:07:04.607 10:34:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.607 10:34:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.607 10:34:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:04.607 10:34:35 -- common/autotest_common.sh@871 -- # break 00:07:04.607 10:34:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.607 10:34:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.607 10:34:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.607 1+0 records in 00:07:04.607 1+0 records out 00:07:04.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845066 s, 4.8 MB/s 00:07:04.607 10:34:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.607 10:34:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:04.607 10:34:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.607 10:34:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.607 10:34:35 -- common/autotest_common.sh@887 -- # return 0 00:07:04.607 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.607 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.607 10:34:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:04.868 /dev/nbd11 00:07:04.868 10:34:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:04.868 10:34:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:04.868 10:34:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:04.868 10:34:35 -- common/autotest_common.sh@867 -- # local i 00:07:04.868 10:34:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.868 10:34:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.868 10:34:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:04.868 10:34:35 -- 
common/autotest_common.sh@871 -- # break 00:07:04.868 10:34:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.868 10:34:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.868 10:34:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.868 1+0 records in 00:07:04.868 1+0 records out 00:07:04.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472585 s, 8.7 MB/s 00:07:04.868 10:34:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.868 10:34:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:04.868 10:34:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.868 10:34:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.868 10:34:35 -- common/autotest_common.sh@887 -- # return 0 00:07:04.868 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.868 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.868 10:34:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:04.868 /dev/nbd12 00:07:04.868 10:34:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:04.868 10:34:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:04.868 10:34:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:04.868 10:34:35 -- common/autotest_common.sh@867 -- # local i 00:07:04.869 10:34:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.869 10:34:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.869 10:34:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:04.869 10:34:35 -- common/autotest_common.sh@871 -- # break 00:07:04.869 10:34:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.869 10:34:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.869 10:34:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.869 1+0 records in 00:07:04.869 1+0 records out 00:07:04.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500921 s, 8.2 MB/s 00:07:04.869 10:34:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.869 10:34:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:04.869 10:34:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.869 10:34:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.869 10:34:35 -- common/autotest_common.sh@887 -- # return 0 00:07:04.869 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.869 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.869 10:34:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:05.130 /dev/nbd13 00:07:05.130 10:34:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:05.130 10:34:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:05.130 10:34:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:05.130 10:34:35 -- common/autotest_common.sh@867 -- # local i 00:07:05.130 10:34:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:05.130 10:34:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:05.130 10:34:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
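The readiness gate repeating for each device above does two things: poll /proc/partitions until the kernel registers the node, then prove the device actually serves I/O with a single direct 4 KiB read. A sketch of waitfornbd matching the traced shape (the 20-try bounds, the grep, and the dd/stat checks are in the trace; the sleep interval and temp path are assumptions):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          # a connected nbd device shows up in the kernel partition table
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      for ((i = 1; i <= 20; i++)); do
          # one O_DIRECT read has to complete and produce a non-empty file
          if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
              size=$(stat -c %s /tmp/nbdtest)
              rm -f /tmp/nbdtest
              [[ $size -ne 0 ]] && return 0
          fi
          sleep 0.1
      done
      return 1
  }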
00:07:05.130 10:34:35 -- common/autotest_common.sh@871 -- # break 00:07:05.130 10:34:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:05.130 10:34:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:05.130 10:34:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.130 1+0 records in 00:07:05.130 1+0 records out 00:07:05.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123318 s, 3.3 MB/s 00:07:05.130 10:34:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.130 10:34:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:05.130 10:34:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.130 10:34:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:05.130 10:34:35 -- common/autotest_common.sh@887 -- # return 0 00:07:05.130 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.130 10:34:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.130 10:34:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.130 10:34:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.130 10:34:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.391 10:34:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd0", 00:07:05.391 "bdev_name": "Nvme0n1" 00:07:05.391 }, 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd1", 00:07:05.391 "bdev_name": "Nvme1n1" 00:07:05.391 }, 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd10", 00:07:05.391 "bdev_name": "Nvme2n1" 00:07:05.391 }, 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd11", 00:07:05.391 "bdev_name": "Nvme2n2" 00:07:05.391 }, 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd12", 00:07:05.391 "bdev_name": "Nvme2n3" 00:07:05.391 }, 00:07:05.391 { 00:07:05.391 "nbd_device": "/dev/nbd13", 00:07:05.391 "bdev_name": "Nvme3n1" 00:07:05.391 } 00:07:05.391 ]' 00:07:05.391 10:34:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.392 { 00:07:05.392 "nbd_device": "/dev/nbd0", 00:07:05.392 "bdev_name": "Nvme0n1" 00:07:05.392 }, 00:07:05.392 { 00:07:05.392 "nbd_device": "/dev/nbd1", 00:07:05.392 "bdev_name": "Nvme1n1" 00:07:05.392 }, 00:07:05.392 { 00:07:05.392 "nbd_device": "/dev/nbd10", 00:07:05.392 "bdev_name": "Nvme2n1" 00:07:05.392 }, 00:07:05.392 { 00:07:05.392 "nbd_device": "/dev/nbd11", 00:07:05.392 "bdev_name": "Nvme2n2" 00:07:05.392 }, 00:07:05.392 { 00:07:05.392 "nbd_device": "/dev/nbd12", 00:07:05.392 "bdev_name": "Nvme2n3" 00:07:05.392 }, 00:07:05.392 { 00:07:05.392 "nbd_device": "/dev/nbd13", 00:07:05.392 "bdev_name": "Nvme3n1" 00:07:05.392 } 00:07:05.392 ]' 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.392 /dev/nbd1 00:07:05.392 /dev/nbd10 00:07:05.392 /dev/nbd11 00:07:05.392 /dev/nbd12 00:07:05.392 /dev/nbd13' 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.392 /dev/nbd1 00:07:05.392 /dev/nbd10 00:07:05.392 /dev/nbd11 00:07:05.392 /dev/nbd12 00:07:05.392 /dev/nbd13' 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@65 -- # count=6 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@95 -- # count=6 
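The count=6 just computed is asserted next, and then the harness pushes one shared 1 MiB random pattern through every node and reads it back. Both checks, condensed (rpc.py path shortened, temp file relocated for the sketch):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  # nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs;
  # jq flattens it to one device path per line and grep -c counts the exports
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [[ $count -eq 6 ]] || exit 1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 256 x 4 KiB = 1 MiB
  for dev in "${nbd_list[@]}"; do
      dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct   # write phase
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M /tmp/nbdrandtest "$dev"   # verify phase: byte-for-byte readback
  done
  rm /tmp/nbdrandtest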
00:07:05.392 10:34:35 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:05.392 256+0 records in 00:07:05.392 256+0 records out 00:07:05.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739768 s, 142 MB/s 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.392 10:34:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.653 256+0 records in 00:07:05.653 256+0 records out 00:07:05.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108505 s, 9.7 MB/s 00:07:05.653 10:34:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.653 10:34:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.653 256+0 records in 00:07:05.653 256+0 records out 00:07:05.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787429 s, 13.3 MB/s 00:07:05.653 10:34:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.653 10:34:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:05.653 256+0 records in 00:07:05.653 256+0 records out 00:07:05.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102011 s, 10.3 MB/s 00:07:05.653 10:34:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.654 10:34:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:05.914 256+0 records in 00:07:05.914 256+0 records out 00:07:05.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0703196 s, 14.9 MB/s 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:05.914 256+0 records in 00:07:05.914 256+0 records out 00:07:05.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0883895 s, 11.9 MB/s 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:05.914 256+0 records in 00:07:05.914 256+0 records out 00:07:05.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0762586 s, 13.8 MB/s 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.914 10:34:36 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.914 10:34:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@51 -- # local i 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@41 -- # break 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.175 10:34:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.435 10:34:36 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@41 -- # break 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.435 10:34:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@41 -- # break 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.695 10:34:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@41 -- # break 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@41 -- # break 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.955 10:34:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@41 -- # break 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.215 10:34:37 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@65 -- # true 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.477 10:34:37 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:07.477 10:34:37 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:07.739 malloc_lvol_verify 00:07:07.739 10:34:38 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:07.999 2e369308-fcb9-44ea-99b5-3ffc9475a0a2 00:07:07.999 10:34:38 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:07.999 4ca92a90-09b3-4f51-b13e-835967545d32 00:07:07.999 10:34:38 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:08.260 /dev/nbd0 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:08.260 mke2fs 1.47.0 (5-Feb-2023) 00:07:08.260 Discarding device blocks: 0/4096 done 00:07:08.260 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:08.260 00:07:08.260 Allocating group tables: 0/1 done 00:07:08.260 Writing inode tables: 0/1 done 00:07:08.260 Creating journal (1024 blocks): done 00:07:08.260 Writing superblocks and filesystem accounting information: 0/1 done 00:07:08.260 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@51 -- # local i 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.260 10:34:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.521 10:34:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@41 -- # break 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:08.522 10:34:38 -- bdev/nbd_common.sh@147 -- # return 0 00:07:08.522 10:34:38 -- bdev/blockdev.sh@324 -- # killprocess 60449 00:07:08.522 10:34:38 -- common/autotest_common.sh@936 -- # '[' -z 60449 ']' 00:07:08.522 10:34:38 -- common/autotest_common.sh@940 -- # kill -0 60449 00:07:08.522 10:34:38 -- common/autotest_common.sh@941 -- # uname 00:07:08.522 10:34:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.522 10:34:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60449 00:07:08.522 10:34:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.522 10:34:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.522 killing process with pid 60449 00:07:08.522 10:34:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60449' 00:07:08.522 10:34:39 -- common/autotest_common.sh@955 -- # kill 60449 00:07:08.522 10:34:39 -- common/autotest_common.sh@960 -- # wait 60449 00:07:09.463 10:34:39 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:09.463 00:07:09.463 real 0m10.034s 00:07:09.463 user 0m14.067s 00:07:09.463 sys 0m2.909s 00:07:09.463 10:34:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.463 10:34:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.463 ************************************ 00:07:09.463 END TEST bdev_nbd 00:07:09.463 ************************************ 00:07:09.463 10:34:39 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:09.463 10:34:39 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:09.463 skipping fio tests on NVMe due to multi-ns failures. 00:07:09.463 10:34:39 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:09.463 10:34:39 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:09.463 10:34:39 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.463 10:34:39 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:09.463 10:34:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.463 10:34:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.463 ************************************ 00:07:09.463 START TEST bdev_verify 00:07:09.463 ************************************ 00:07:09.463 10:34:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.463 [2024-12-03 10:34:40.037865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
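That closes out TEST bdev_nbd. Its last exercise before the teardown traced above was a logical-volume round trip: carve an lvol out of a malloc bdev, export it over nbd, and let mkfs.ext4 prove that ordinary kernel I/O lands correctly. The RPC sequence as traced (rpc.py path shortened):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume, exposed as "lvs/lvol"
  $rpc nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0    # clean mkfs output means writes, reads and flushes all round-trip
  $rpc nbd_stop_disk /dev/nbd0

With that green, the nbd target is killed off (the kill -0, ps, kill, wait dance in the trace) and bdevperf takes over for the verify workload now initializing.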
00:07:09.463 [2024-12-03 10:34:40.037975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60824 ] 00:07:09.725 [2024-12-03 10:34:40.188912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.986 [2024-12-03 10:34:40.396016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.986 [2024-12-03 10:34:40.396034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.559 Running I/O for 5 seconds... 00:07:15.850 00:07:15.850 Latency(us) 00:07:15.850 [2024-12-03T10:34:46.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x0 length 0xbd0bd 00:07:15.850 Nvme0n1 : 5.04 2828.13 11.05 0.00 0.00 45100.94 10687.41 48194.17 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:15.850 Nvme0n1 : 5.05 2803.40 10.95 0.00 0.00 45363.95 6755.25 53235.40 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x0 length 0xa0000 00:07:15.850 Nvme1n1 : 5.04 2833.68 11.07 0.00 0.00 45028.26 3352.42 46782.62 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0xa0000 length 0xa0000 00:07:15.850 Nvme1n1 : 5.06 2801.50 10.94 0.00 0.00 45343.07 9527.93 52025.50 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x0 length 0x80000 00:07:15.850 Nvme2n1 : 5.05 2832.95 11.07 0.00 0.00 44966.77 3856.54 47589.22 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x80000 length 0x80000 00:07:15.850 Nvme2n1 : 5.06 2800.83 10.94 0.00 0.00 45308.36 9779.99 51622.20 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x0 length 0x80000 00:07:15.850 Nvme2n2 : 5.05 2831.92 11.06 0.00 0.00 44942.95 5016.02 48194.17 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x80000 length 0x80000 00:07:15.850 Nvme2n2 : 5.04 2801.21 10.94 0.00 0.00 45571.96 6200.71 52025.50 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x0 length 0x80000 00:07:15.850 Nvme2n3 : 5.05 2829.50 11.05 0.00 0.00 44918.28 8670.92 46177.67 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x80000 length 0x80000 00:07:15.850 Nvme2n3 : 5.04 2800.44 10.94 0.00 0.00 45543.50 6402.36 53235.40 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x0 length 0x20000 00:07:15.850 Nvme3n1 : 
5.06 2836.20 11.08 0.00 0.00 44799.82 882.22 45572.73 00:07:15.850 [2024-12-03T10:34:46.463Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.850 Verification LBA range: start 0x20000 length 0x20000 00:07:15.850 Nvme3n1 : 5.05 2805.96 10.96 0.00 0.00 45391.45 3150.77 53235.40 00:07:15.850 [2024-12-03T10:34:46.463Z] =================================================================================================================== 00:07:15.850 [2024-12-03T10:34:46.463Z] Total : 33805.72 132.05 0.00 0.00 45188.62 882.22 53235.40 00:07:37.824 00:07:37.824 real 0m25.021s 00:07:37.824 user 0m33.400s 00:07:37.824 sys 0m0.528s 00:07:37.824 10:35:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.824 ************************************ 00:07:37.824 END TEST bdev_verify 00:07:37.824 ************************************ 00:07:37.824 10:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:37.824 10:35:05 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:37.824 10:35:05 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:37.824 10:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.824 10:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:37.824 ************************************ 00:07:37.824 START TEST bdev_verify_big_io 00:07:37.824 ************************************ 00:07:37.824 10:35:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:37.824 [2024-12-03 10:35:05.123421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:37.824 [2024-12-03 10:35:05.123542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61028 ] 00:07:37.824 [2024-12-03 10:35:05.270739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.824 [2024-12-03 10:35:05.471916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.824 [2024-12-03 10:35:05.472070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.824 Running I/O for 5 seconds... 
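Before the big-I/O numbers land, a cross-check on the verify table above: the MiB/s column is just IOPS times the 4 KiB I/O size, e.g.

  awk 'BEGIN { print 2828.13 * 4096 / 1048576 }'    # -> 11.047, the Nvme0n1 MiB/s cell
  awk 'BEGIN { print 33805.72 * 4096 / 1048576 }'   # -> 132.05, the totals row

The same identity holds for the run below with -o 65536 in place of -o 4096. Note also that -t 5 bounds only the measured I/O window, which is why the test's real time (0m25.021s) comfortably exceeds five seconds.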
00:07:41.136 00:07:41.136 Latency(us) 00:07:41.136 [2024-12-03T10:35:11.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x0 length 0xbd0b 00:07:41.136 Nvme0n1 : 5.37 233.27 14.58 0.00 0.00 531433.11 98001.53 751748.33 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:41.136 Nvme0n1 : 5.36 251.48 15.72 0.00 0.00 499442.47 50412.31 693673.35 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x0 length 0xa000 00:07:41.136 Nvme1n1 : 5.41 239.88 14.99 0.00 0.00 514440.61 32667.18 683994.19 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0xa000 length 0xa000 00:07:41.136 Nvme1n1 : 5.37 251.39 15.71 0.00 0.00 492487.74 51218.90 635598.38 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x0 length 0x8000 00:07:41.136 Nvme2n1 : 5.42 248.77 15.55 0.00 0.00 492438.00 13208.02 616240.05 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x8000 length 0x8000 00:07:41.136 Nvme2n1 : 5.38 258.36 16.15 0.00 0.00 475127.85 15526.99 580749.78 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x0 length 0x8000 00:07:41.136 Nvme2n2 : 5.43 248.63 15.54 0.00 0.00 484788.56 14115.45 564617.85 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x8000 length 0x8000 00:07:41.136 Nvme2n2 : 5.39 258.27 16.14 0.00 0.00 468288.73 16131.94 522674.81 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x0 length 0x8000 00:07:41.136 Nvme2n3 : 5.44 255.76 15.98 0.00 0.00 464458.34 10637.00 487184.54 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x8000 length 0x8000 00:07:41.136 Nvme2n3 : 5.41 264.27 16.52 0.00 0.00 451293.66 22685.54 542033.13 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x0 length 0x2000 00:07:41.136 Nvme3n1 : 5.45 271.20 16.95 0.00 0.00 431551.58 5167.26 583976.17 00:07:41.136 [2024-12-03T10:35:11.749Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:41.136 Verification LBA range: start 0x2000 length 0x2000 00:07:41.136 Nvme3n1 : 5.42 281.07 17.57 0.00 0.00 419631.07 4209.43 548485.91 00:07:41.136 [2024-12-03T10:35:11.749Z] =================================================================================================================== 00:07:41.136 [2024-12-03T10:35:11.749Z] Total : 3062.34 191.40 0.00 0.00 475533.96 4209.43 751748.33 00:07:43.069 00:07:43.069 real 0m8.134s 00:07:43.069 user 
0m15.032s 00:07:43.069 sys 0m0.275s 00:07:43.069 10:35:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.069 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.069 ************************************ 00:07:43.069 END TEST bdev_verify_big_io 00:07:43.069 ************************************ 00:07:43.069 10:35:13 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.069 10:35:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:43.069 10:35:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.069 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.069 ************************************ 00:07:43.069 START TEST bdev_write_zeroes 00:07:43.069 ************************************ 00:07:43.069 10:35:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.069 [2024-12-03 10:35:13.298770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.069 [2024-12-03 10:35:13.298889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:07:43.069 [2024-12-03 10:35:13.448405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.069 [2024-12-03 10:35:13.665915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.646 Running I/O for 1 seconds... 00:07:45.032 00:07:45.032 Latency(us) 00:07:45.032 [2024-12-03T10:35:15.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.032 [2024-12-03T10:35:15.645Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:45.032 Nvme0n1 : 1.01 9210.90 35.98 0.00 0.00 13856.58 4814.38 25206.15 00:07:45.032 [2024-12-03T10:35:15.645Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:45.032 Nvme1n1 : 1.02 9200.16 35.94 0.00 0.00 13853.57 9427.10 25105.33 00:07:45.032 [2024-12-03T10:35:15.645Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:45.032 Nvme2n1 : 1.02 9231.69 36.06 0.00 0.00 13734.95 7309.78 20769.87 00:07:45.032 [2024-12-03T10:35:15.645Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:45.032 Nvme2n2 : 1.02 9221.20 36.02 0.00 0.00 13713.54 7813.91 21475.64 00:07:45.032 [2024-12-03T10:35:15.645Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:45.032 Nvme2n3 : 1.02 9258.96 36.17 0.00 0.00 13628.02 5066.44 21072.34 00:07:45.032 [2024-12-03T10:35:15.645Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:45.032 Nvme3n1 : 1.02 9248.55 36.13 0.00 0.00 13608.35 5444.53 20669.05 00:07:45.032 [2024-12-03T10:35:15.645Z] =================================================================================================================== 00:07:45.032 [2024-12-03T10:35:15.645Z] Total : 55371.46 216.29 0.00 0.00 13731.96 4814.38 25206.15 00:07:45.605 00:07:45.605 real 0m2.915s 00:07:45.605 user 0m2.576s 00:07:45.605 sys 0m0.219s 00:07:45.605 ************************************ 00:07:45.605 10:35:16 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.605 10:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.605 END TEST bdev_write_zeroes 00:07:45.605 ************************************ 00:07:45.605 10:35:16 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.605 10:35:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:45.605 10:35:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.605 10:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.605 ************************************ 00:07:45.605 START TEST bdev_json_nonenclosed 00:07:45.605 ************************************ 00:07:45.605 10:35:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.866 [2024-12-03 10:35:16.271852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.866 [2024-12-03 10:35:16.271967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61190 ] 00:07:45.866 [2024-12-03 10:35:16.421827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.126 [2024-12-03 10:35:16.642137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.126 [2024-12-03 10:35:16.642328] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:46.126 [2024-12-03 10:35:16.642349] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.386 00:07:46.386 real 0m0.744s 00:07:46.386 user 0m0.530s 00:07:46.386 sys 0m0.107s 00:07:46.386 10:35:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.386 ************************************ 00:07:46.386 END TEST bdev_json_nonenclosed 00:07:46.386 ************************************ 00:07:46.386 10:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:46.647 10:35:17 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.647 10:35:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:46.647 10:35:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.647 10:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:46.647 ************************************ 00:07:46.647 START TEST bdev_json_nonarray 00:07:46.647 ************************************ 00:07:46.647 10:35:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.647 [2024-12-03 10:35:17.089144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
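bdev_json_nonenclosed above and bdev_json_nonarray just starting are both negative tests: each hands bdevperf a deliberately malformed config and passes only if the JSON loader rejects it with the specific error seen in the trace. The shape of the idea (illustrative file contents and check; the real files are test/bdev/nonenclosed.json and nonarray.json, and the harness's exact pass condition may differ):

  echo '"subsystems": []' > /tmp/nonenclosed.json    # top level not wrapped in {}
  echo '{ "subsystems": {} }' > /tmp/nonarray.json   # "subsystems" not an array
  for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
      bdevperf --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 '' 2>&1 \
          | grep -q 'Invalid JSON configuration'     # rejection is the expected outcome
  done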
00:07:46.647 [2024-12-03 10:35:17.089289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61221 ] 00:07:46.647 [2024-12-03 10:35:17.244592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.217 [2024-12-03 10:35:17.543756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.217 [2024-12-03 10:35:17.544031] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:47.217 [2024-12-03 10:35:17.544077] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.478 00:07:47.478 real 0m0.898s 00:07:47.478 user 0m0.644s 00:07:47.478 sys 0m0.140s 00:07:47.478 10:35:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.478 ************************************ 00:07:47.478 END TEST bdev_json_nonarray 00:07:47.478 ************************************ 00:07:47.478 10:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.478 10:35:17 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:07:47.478 10:35:17 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:07:47.478 10:35:17 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:07:47.478 10:35:17 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:07:47.478 10:35:17 -- bdev/blockdev.sh@809 -- # cleanup 00:07:47.478 10:35:17 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:47.478 10:35:17 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.478 10:35:17 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:07:47.478 10:35:17 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:07:47.478 10:35:17 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:07:47.478 10:35:17 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:07:47.478 00:07:47.478 real 0m56.500s 00:07:47.478 user 1m18.933s 00:07:47.478 sys 0m5.375s 00:07:47.478 10:35:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.478 ************************************ 00:07:47.478 END TEST blockdev_nvme 00:07:47.478 10:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.478 ************************************ 00:07:47.478 10:35:18 -- spdk/autotest.sh@206 -- # uname -s 00:07:47.478 10:35:18 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:07:47.478 10:35:18 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:47.478 10:35:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:47.478 10:35:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.478 10:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:47.478 ************************************ 00:07:47.478 START TEST blockdev_nvme_gpt 00:07:47.478 ************************************ 00:07:47.479 10:35:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:47.741 * Looking for test storage... 
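The long scripts/common.sh walk that follows the storage scan is a pure-bash version gate: lt 1.15 2 decides whether the installed lcov predates 2.x and therefore needs the legacy --rc coverage flags. Its core, condensed (the real cmp_versions also handles >, <= and >= through the same field-by-field compare):

  lt() {   # true when $1 sorts before $2, dotted-decimal compare
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov older than 2.x, keep legacy --rc flags"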
00:07:47.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:47.741 10:35:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.741 10:35:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.741 10:35:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.741 10:35:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.741 10:35:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.741 10:35:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.741 10:35:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.741 10:35:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.741 10:35:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.741 10:35:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.741 10:35:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.741 10:35:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.741 10:35:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.741 10:35:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.741 10:35:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.741 10:35:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.741 10:35:18 -- scripts/common.sh@344 -- # : 1 00:07:47.741 10:35:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.741 10:35:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.741 10:35:18 -- scripts/common.sh@364 -- # decimal 1 00:07:47.741 10:35:18 -- scripts/common.sh@352 -- # local d=1 00:07:47.741 10:35:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.741 10:35:18 -- scripts/common.sh@354 -- # echo 1 00:07:47.741 10:35:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.741 10:35:18 -- scripts/common.sh@365 -- # decimal 2 00:07:47.741 10:35:18 -- scripts/common.sh@352 -- # local d=2 00:07:47.741 10:35:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.741 10:35:18 -- scripts/common.sh@354 -- # echo 2 00:07:47.741 10:35:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.741 10:35:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.741 10:35:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.741 10:35:18 -- scripts/common.sh@367 -- # return 0 00:07:47.741 10:35:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.741 10:35:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.741 --rc genhtml_branch_coverage=1 00:07:47.741 --rc genhtml_function_coverage=1 00:07:47.741 --rc genhtml_legend=1 00:07:47.741 --rc geninfo_all_blocks=1 00:07:47.741 --rc geninfo_unexecuted_blocks=1 00:07:47.741 00:07:47.741 ' 00:07:47.741 10:35:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.741 --rc genhtml_branch_coverage=1 00:07:47.741 --rc genhtml_function_coverage=1 00:07:47.741 --rc genhtml_legend=1 00:07:47.741 --rc geninfo_all_blocks=1 00:07:47.741 --rc geninfo_unexecuted_blocks=1 00:07:47.741 00:07:47.741 ' 00:07:47.741 10:35:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.741 --rc genhtml_branch_coverage=1 00:07:47.741 --rc genhtml_function_coverage=1 00:07:47.741 --rc genhtml_legend=1 00:07:47.741 --rc geninfo_all_blocks=1 00:07:47.741 --rc geninfo_unexecuted_blocks=1 00:07:47.741 00:07:47.741 ' 00:07:47.741 10:35:18 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.741 --rc genhtml_branch_coverage=1 00:07:47.741 --rc genhtml_function_coverage=1 00:07:47.741 --rc genhtml_legend=1 00:07:47.741 --rc geninfo_all_blocks=1 00:07:47.741 --rc geninfo_unexecuted_blocks=1 00:07:47.741 00:07:47.741 ' 00:07:47.741 10:35:18 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:47.741 10:35:18 -- bdev/nbd_common.sh@6 -- # set -e 00:07:47.741 10:35:18 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:47.741 10:35:18 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.741 10:35:18 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:47.741 10:35:18 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:47.741 10:35:18 -- bdev/blockdev.sh@18 -- # : 00:07:47.741 10:35:18 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:47.741 10:35:18 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:47.741 10:35:18 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:47.741 10:35:18 -- bdev/blockdev.sh@672 -- # uname -s 00:07:47.741 10:35:18 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:47.741 10:35:18 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:47.741 10:35:18 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:07:47.741 10:35:18 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:47.741 10:35:18 -- bdev/blockdev.sh@682 -- # dek= 00:07:47.741 10:35:18 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:47.741 10:35:18 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:47.741 10:35:18 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:47.741 10:35:18 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:07:47.741 10:35:18 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:07:47.741 10:35:18 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:47.741 10:35:18 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61304 00:07:47.741 10:35:18 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:47.741 10:35:18 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:47.741 10:35:18 -- bdev/blockdev.sh@47 -- # waitforlisten 61304 00:07:47.741 10:35:18 -- common/autotest_common.sh@829 -- # '[' -z 61304 ']' 00:07:47.741 10:35:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.741 10:35:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.741 10:35:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.741 10:35:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.741 10:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:47.742 [2024-12-03 10:35:18.292035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
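spdk_tgt comes up asynchronously, so waitforlisten has to bridge the gap between launching the target and having a live RPC socket, which is what the wait message above is about. The usual shape, sketched (the poll bound, interval and rpc_get_methods probe are assumptions about the helper's internals):

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          # the socket counts as listening once any RPC gets an answer
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }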
00:07:47.742 [2024-12-03 10:35:18.292197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61304 ] 00:07:48.004 [2024-12-03 10:35:18.445686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.267 [2024-12-03 10:35:18.734911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.267 [2024-12-03 10:35:18.735204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.285 10:35:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.285 10:35:19 -- common/autotest_common.sh@862 -- # return 0 00:07:49.285 10:35:19 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:49.285 10:35:19 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:07:49.285 10:35:19 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:49.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.859 Waiting for block devices as requested 00:07:49.859 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.120 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.120 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.120 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:07:55.409 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:07:55.409 10:35:25 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:07:55.409 10:35:25 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:07:55.409 10:35:25 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:07:55.409 10:35:25 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:07:55.409 10:35:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:55.409 10:35:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:55.409 10:35:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:55.409 10:35:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:55.409 10:35:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:07:55.409 10:35:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:07:55.409 10:35:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:55.409 10:35:25 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:55.409 10:35:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:07:55.409 10:35:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:07:55.409 10:35:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:55.409 10:35:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:55.409 10:35:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:07:55.409 10:35:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:55.409 10:35:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:55.409 10:35:25 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:07:55.409 10:35:25 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:07:55.409 10:35:25 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:07:55.409 10:35:25 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:55.409 10:35:25 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:07:55.409 10:35:25 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:07:55.409 10:35:25 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:07:55.409 10:35:25 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:07:55.409 BYT; 00:07:55.409 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:55.409 10:35:25 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:07:55.409 BYT; 00:07:55.409 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:55.409 10:35:25 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:07:55.409 10:35:25 -- bdev/blockdev.sh@114 -- # break 00:07:55.409 10:35:25 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:07:55.409 10:35:25 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:55.409 10:35:25 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:55.409 10:35:25 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:55.409 10:35:25 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:07:55.409 10:35:25 -- scripts/common.sh@410 -- # local spdk_guid 00:07:55.409 10:35:25 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:55.409 10:35:25 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:55.409 10:35:25 -- scripts/common.sh@415 -- # IFS='()' 00:07:55.410 10:35:25 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:07:55.410 10:35:25 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:55.410 10:35:25 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:55.410 10:35:25 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:55.410 10:35:25 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:55.410 10:35:25 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:55.410 10:35:25 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:07:55.410 10:35:25 -- scripts/common.sh@422 -- # local spdk_guid 00:07:55.410 10:35:25 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:55.410 10:35:25 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:55.410 10:35:25 -- scripts/common.sh@427 -- # IFS='()' 00:07:55.410 10:35:25 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:07:55.410 10:35:25 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:55.410 10:35:25 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:55.410 10:35:25 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:55.410 10:35:25 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:55.410 10:35:25 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:55.410 10:35:25 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:07:56.349 The operation has completed successfully. 00:07:56.349 10:35:26 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:07:57.289 The operation has completed successfully. 
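(Note on the GPT step just completed: setup_gpt_conf takes the first namespace that parted reports with an unrecognised disk label, writes a fresh GPT with two half-size partitions, and retags them with SPDK's partition-type GUIDs scraped out of module/bdev/gpt/gpt.h. Condensed into standalone shell, the sequence is roughly the sketch below; dev stands in for the /dev/nvme2n1 picked in this run, and the GUID post-processing is a simplified equivalent of what scripts/common.sh does.)

    # Sketch of the GPT provisioning performed above; assumes $dev is a scratch namespace.
    dev=/dev/nvme2n1
    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h

    # The GUID macros in gpt.h look like NAME(0x6527994e, 0x2c5a, ...), so split on
    # the parentheses and normalize "0xAA, 0xBB, ..." into dashed lowercase form.
    IFS='()' read -r _ guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    guid=${guid//0x/}; guid=${guid//, /-}
    IFS='()' read -r _ old_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID_OLD "$GPT_H")
    old_guid=${old_guid//0x/}; old_guid=${old_guid//, /-}

    # Label the disk, carve two equal partitions, then retag partition 1 with the
    # current SPDK type GUID and partition 2 with the legacy one; the -u unique
    # partition GUIDs are fixed so later tests can find the partitions by alias.
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    sgdisk -t "1:$guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t "2:$old_guid" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"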
00:07:57.289 10:35:27 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:58.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:58.224 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.224 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.224 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.482 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.482 10:35:28 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:07:58.482 10:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.482 10:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.482 [] 00:07:58.482 10:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.482 10:35:28 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:07:58.482 10:35:28 -- bdev/blockdev.sh@79 -- # local json 00:07:58.482 10:35:28 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:58.482 10:35:28 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:58.482 10:35:28 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:58.482 10:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.482 10:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 10:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.741 10:35:29 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:58.741 10:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.741 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 10:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.741 10:35:29 -- bdev/blockdev.sh@738 -- # cat 00:07:58.741 10:35:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:58.741 10:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.741 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 10:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.741 10:35:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:58.741 10:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.741 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 10:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.741 10:35:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:58.741 10:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.741 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 10:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.741 10:35:29 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:58.741 10:35:29 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:58.741 10:35:29 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:58.741 10:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.741 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 10:35:29 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.741 10:35:29 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:58.741 10:35:29 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:58.742 10:35:29 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8d439174-c658-47e9-bc83-15a8efcd1b77"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8d439174-c658-47e9-bc83-15a8efcd1b77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "5461524e-aeb0-4ef1-969d-6f339888dddb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5461524e-aeb0-4ef1-969d-6f339888dddb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "866cf5ae-6334-45d9-a88d-af6a4ea365a6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "866cf5ae-6334-45d9-a88d-af6a4ea365a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c8d612e1-6b24-4804-bef7-91bfd2367f03"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c8d612e1-6b24-4804-bef7-91bfd2367f03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5c303b5d-026f-4ed0-a399-3e7e6f7c103d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5c303b5d-026f-4ed0-a399-3e7e6f7c103d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:58.742 10:35:29 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:58.742 10:35:29 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:07:58.742 10:35:29 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:58.742 10:35:29 -- bdev/blockdev.sh@752 -- # killprocess 61304 00:07:58.742 10:35:29 -- common/autotest_common.sh@936 -- # '[' -z 61304 ']' 00:07:58.742 10:35:29 -- common/autotest_common.sh@940 -- # kill -0 61304 00:07:58.742 10:35:29 -- common/autotest_common.sh@941 -- # uname 00:07:58.742 10:35:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.000 10:35:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61304 00:07:59.000 killing process with pid 61304 00:07:59.000 10:35:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.000 10:35:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.000 10:35:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61304' 00:07:59.000 10:35:29 -- common/autotest_common.sh@955 -- # kill 61304 00:07:59.000 10:35:29 -- common/autotest_common.sh@960 -- # wait 61304 00:08:00.394 10:35:30 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:00.394 10:35:30 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:00.394 10:35:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:00.394 10:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.394 10:35:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.394 ************************************ 00:08:00.394 START TEST bdev_hello_world 00:08:00.394 ************************************ 00:08:00.394 10:35:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:00.394 [2024-12-03 10:35:30.973601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.394 [2024-12-03 10:35:30.973774] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61964 ] 00:08:00.652 [2024-12-03 10:35:31.140617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.912 [2024-12-03 10:35:31.365385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.498 [2024-12-03 10:35:31.943588] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:01.498 [2024-12-03 10:35:31.943673] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:01.498 [2024-12-03 10:35:31.943705] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:01.498 [2024-12-03 10:35:31.946795] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:01.498 [2024-12-03 10:35:31.947900] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:01.498 [2024-12-03 10:35:31.948191] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:01.498 [2024-12-03 10:35:31.948689] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:01.498 00:08:01.498 [2024-12-03 10:35:31.948731] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:02.435 00:08:02.435 real 0m1.946s 00:08:02.435 user 0m1.586s 00:08:02.435 sys 0m0.247s 00:08:02.435 ************************************ 00:08:02.435 END TEST bdev_hello_world 00:08:02.435 ************************************ 00:08:02.435 10:35:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.435 10:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:02.435 10:35:32 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:02.435 10:35:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.435 10:35:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.435 10:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:02.435 ************************************ 00:08:02.435 START TEST bdev_bounds 00:08:02.435 ************************************ 00:08:02.435 10:35:32 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:02.435 10:35:32 -- bdev/blockdev.sh@288 -- # bdevio_pid=62006 00:08:02.435 Process bdevio pid: 62006 00:08:02.435 10:35:32 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:02.435 10:35:32 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62006' 00:08:02.435 10:35:32 -- bdev/blockdev.sh@291 -- # waitforlisten 62006 00:08:02.435 10:35:32 -- common/autotest_common.sh@829 -- # '[' -z 62006 ']' 00:08:02.435 10:35:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.435 10:35:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.435 10:35:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
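(Note: the waitforlisten call above is what gates every daemon-style test in this job: the harness refuses to drive RPCs until the freshly forked process, here the bdevio instance that starts below, answers on its UNIX socket. A stripped-down stand-in for that helper, assuming only rpc.py's standard -s/-t options and the rpc_get_methods method, would be the following sketch; it is not the real autotest_common.sh implementation, just the shape of it.)

    # Hypothetical minimal waitforlisten: poll the target's RPC socket until it
    # responds, the process dies, or ~10 s elapse.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                 # process exited early
            "$rpc_py" -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                                   # never came up
    }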
00:08:02.435 10:35:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.435 10:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:02.435 10:35:32 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:02.435 [2024-12-03 10:35:32.959603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:02.435 [2024-12-03 10:35:32.959735] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62006 ] 00:08:02.693 [2024-12-03 10:35:33.109556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.952 [2024-12-03 10:35:33.325884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.952 [2024-12-03 10:35:33.326041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.952 [2024-12-03 10:35:33.326043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.887 10:35:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.887 10:35:34 -- common/autotest_common.sh@862 -- # return 0 00:08:03.887 10:35:34 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:04.146 I/O targets: 00:08:04.146 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:04.146 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:04.146 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:04.146 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:04.146 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:04.146 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:04.146 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:04.146 00:08:04.146 00:08:04.146 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.146 http://cunit.sourceforge.net/ 00:08:04.146 00:08:04.146 00:08:04.146 Suite: bdevio tests on: Nvme3n1 00:08:04.146 Test: blockdev write read block ...passed 00:08:04.146 Test: blockdev write zeroes read block ...passed 00:08:04.146 Test: blockdev write zeroes read no split ...passed 00:08:04.146 Test: blockdev write zeroes read split ...passed 00:08:04.146 Test: blockdev write zeroes read split partial ...passed 00:08:04.146 Test: blockdev reset ...[2024-12-03 10:35:34.589324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:04.146 passed 00:08:04.146 Test: blockdev write read 8 blocks ...[2024-12-03 10:35:34.593490] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
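(Note on reading the suite output, which continues below: bdevio was started with -w, so it loads the bdevs and then idles until tests.py sends the perform_tests RPC. The *NOTICE* and *ERROR* lines inside passing tests are the exercised error paths, not failures: "blockdev comparev and writev" passes when the deliberate miscompare comes back COMPARE FAILURE (02/85), and the passthru tests pass when an unsupported opcode is rejected with INVALID OPCODE (00/01). Reproducing the run by hand looks roughly like this sketch, reusing the poll helper sketched earlier.)

    # Sketch: launch bdevio in wait mode, then trigger the CUnit suites over RPC.
    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    "$bdevio" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    waitforlisten_sketch "$bdevio_pid"            # wait for the RPC socket
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"; wait "$bdevio_pid"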
00:08:04.146 passed 00:08:04.146 Test: blockdev write read size > 128k ...passed 00:08:04.146 Test: blockdev write read invalid size ...passed 00:08:04.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.146 Test: blockdev write read max offset ...passed 00:08:04.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.147 Test: blockdev writev readv 8 blocks ...passed 00:08:04.147 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.147 Test: blockdev writev readv block ...passed 00:08:04.147 Test: blockdev writev readv size > 128k ...passed 00:08:04.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.147 Test: blockdev comparev and writev ...[2024-12-03 10:35:34.608926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27000a000 len:0x1000 00:08:04.147 [2024-12-03 10:35:34.608980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:04.147 passed 00:08:04.147 Test: blockdev nvme passthru rw ...passed 00:08:04.147 Test: blockdev nvme passthru vendor specific ...passed 00:08:04.147 Test: blockdev nvme admin passthru ...[2024-12-03 10:35:34.611323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:04.147 [2024-12-03 10:35:34.611366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:04.147 passed 00:08:04.147 Test: blockdev copy ...passed 00:08:04.147 Suite: bdevio tests on: Nvme2n3 00:08:04.147 Test: blockdev write read block ...passed 00:08:04.147 Test: blockdev write zeroes read block ...passed 00:08:04.147 Test: blockdev write zeroes read no split ...passed 00:08:04.147 Test: blockdev write zeroes read split ...passed 00:08:04.147 Test: blockdev write zeroes read split partial ...passed 00:08:04.147 Test: blockdev reset ...[2024-12-03 10:35:34.669015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:04.147 passed 00:08:04.147 Test: blockdev write read 8 blocks ...[2024-12-03 10:35:34.672457] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:04.147 passed 00:08:04.147 Test: blockdev write read size > 128k ...passed 00:08:04.147 Test: blockdev write read invalid size ...passed 00:08:04.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.147 Test: blockdev write read max offset ...passed 00:08:04.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.147 Test: blockdev writev readv 8 blocks ...passed 00:08:04.147 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.147 Test: blockdev writev readv block ...passed 00:08:04.147 Test: blockdev writev readv size > 128k ...passed 00:08:04.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.147 Test: blockdev comparev and writev ...[2024-12-03 10:35:34.686265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x253104000 len:0x1000 00:08:04.147 [2024-12-03 10:35:34.686312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:04.147 passed 00:08:04.147 Test: blockdev nvme passthru rw ...passed 00:08:04.147 Test: blockdev nvme passthru vendor specific ...passed 00:08:04.147 Test: blockdev nvme admin passthru ...[2024-12-03 10:35:34.688378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:04.147 [2024-12-03 10:35:34.688407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:04.147 passed 00:08:04.147 Test: blockdev copy ...passed 00:08:04.147 Suite: bdevio tests on: Nvme2n2 00:08:04.147 Test: blockdev write read block ...passed 00:08:04.147 Test: blockdev write zeroes read block ...passed 00:08:04.147 Test: blockdev write zeroes read no split ...passed 00:08:04.147 Test: blockdev write zeroes read split ...passed 00:08:04.147 Test: blockdev write zeroes read split partial ...passed 00:08:04.147 Test: blockdev reset ...[2024-12-03 10:35:34.738588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:04.147 passed 00:08:04.147 Test: blockdev write read 8 blocks ...[2024-12-03 10:35:34.742418] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:04.147 passed 00:08:04.147 Test: blockdev write read size > 128k ...passed 00:08:04.147 Test: blockdev write read invalid size ...passed 00:08:04.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.147 Test: blockdev write read max offset ...passed 00:08:04.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.147 Test: blockdev writev readv 8 blocks ...passed 00:08:04.147 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.147 Test: blockdev writev readv block ...passed 00:08:04.147 Test: blockdev writev readv size > 128k ...passed 00:08:04.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.147 Test: blockdev comparev and writev ...[2024-12-03 10:35:34.757229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x253104000 len:0x1000 00:08:04.147 [2024-12-03 10:35:34.757335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:04.406 passed 00:08:04.406 Test: blockdev nvme passthru rw ...passed 00:08:04.406 Test: blockdev nvme passthru vendor specific ...[2024-12-03 10:35:34.759570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:04.406 [2024-12-03 10:35:34.759634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:04.406 passed 00:08:04.406 Test: blockdev nvme admin passthru ...passed 00:08:04.406 Test: blockdev copy ...passed 00:08:04.406 Suite: bdevio tests on: Nvme2n1 00:08:04.406 Test: blockdev write read block ...passed 00:08:04.406 Test: blockdev write zeroes read block ...passed 00:08:04.406 Test: blockdev write zeroes read no split ...passed 00:08:04.406 Test: blockdev write zeroes read split ...passed 00:08:04.406 Test: blockdev write zeroes read split partial ...passed 00:08:04.406 Test: blockdev reset ...[2024-12-03 10:35:34.818094] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:04.406 passed 00:08:04.406 Test: blockdev write read 8 blocks ...[2024-12-03 10:35:34.821725] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:04.406 passed 00:08:04.406 Test: blockdev write read size > 128k ...passed 00:08:04.406 Test: blockdev write read invalid size ...passed 00:08:04.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.406 Test: blockdev write read max offset ...passed 00:08:04.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.406 Test: blockdev writev readv 8 blocks ...passed 00:08:04.406 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.406 Test: blockdev writev readv block ...passed 00:08:04.406 Test: blockdev writev readv size > 128k ...passed 00:08:04.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.406 Test: blockdev comparev and writev ...[2024-12-03 10:35:34.835307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x282a3c000 len:0x1000 00:08:04.406 [2024-12-03 10:35:34.835350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:04.406 passed 00:08:04.406 Test: blockdev nvme passthru rw ...passed 00:08:04.406 Test: blockdev nvme passthru vendor specific ...[2024-12-03 10:35:34.836691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:04.406 [2024-12-03 10:35:34.836721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:04.406 passed 00:08:04.406 Test: blockdev nvme admin passthru ...passed 00:08:04.406 Test: blockdev copy ...passed 00:08:04.406 Suite: bdevio tests on: Nvme1n1 00:08:04.406 Test: blockdev write read block ...passed 00:08:04.406 Test: blockdev write zeroes read block ...passed 00:08:04.406 Test: blockdev write zeroes read no split ...passed 00:08:04.406 Test: blockdev write zeroes read split ...passed 00:08:04.406 Test: blockdev write zeroes read split partial ...passed 00:08:04.406 Test: blockdev reset ...[2024-12-03 10:35:34.887863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:04.406 passed 00:08:04.406 Test: blockdev write read 8 blocks ...[2024-12-03 10:35:34.891374] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:04.406 passed 00:08:04.406 Test: blockdev write read size > 128k ...passed 00:08:04.406 Test: blockdev write read invalid size ...passed 00:08:04.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.406 Test: blockdev write read max offset ...passed 00:08:04.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.406 Test: blockdev writev readv 8 blocks ...passed 00:08:04.406 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.406 Test: blockdev writev readv block ...passed 00:08:04.406 Test: blockdev writev readv size > 128k ...passed 00:08:04.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.406 Test: blockdev comparev and writev ...[2024-12-03 10:35:34.905966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x282a38000 len:0x1000 00:08:04.406 [2024-12-03 10:35:34.906029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:04.406 passed 00:08:04.406 Test: blockdev nvme passthru rw ...passed 00:08:04.406 Test: blockdev nvme passthru vendor specific ...passed 00:08:04.406 Test: blockdev nvme admin passthru ...[2024-12-03 10:35:34.908228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:04.406 [2024-12-03 10:35:34.908263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:04.406 passed 00:08:04.406 Test: blockdev copy ...passed 00:08:04.406 Suite: bdevio tests on: Nvme0n1p2 00:08:04.406 Test: blockdev write read block ...passed 00:08:04.406 Test: blockdev write zeroes read block ...passed 00:08:04.406 Test: blockdev write zeroes read no split ...passed 00:08:04.406 Test: blockdev write zeroes read split ...passed 00:08:04.406 Test: blockdev write zeroes read split partial ...passed 00:08:04.406 Test: blockdev reset ...[2024-12-03 10:35:34.970474] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:04.406 passed 00:08:04.406 Test: blockdev write read 8 blocks ...[2024-12-03 10:35:34.974016] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:04.406 passed 00:08:04.406 Test: blockdev write read size > 128k ...passed 00:08:04.406 Test: blockdev write read invalid size ...passed 00:08:04.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.406 Test: blockdev write read max offset ...passed 00:08:04.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.406 Test: blockdev writev readv 8 blocks ...passed 00:08:04.406 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.407 Test: blockdev writev readv block ...passed 00:08:04.407 Test: blockdev writev readv size > 128k ...passed 00:08:04.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.407 Test: blockdev comparev and writev ...passed 00:08:04.407 Test: blockdev nvme passthru rw ...passed 00:08:04.407 Test: blockdev nvme passthru vendor specific ...passed 00:08:04.407 Test: blockdev nvme admin passthru ...passed 00:08:04.407 Test: blockdev copy ...[2024-12-03 10:35:34.985007] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:04.407 separate metadata which is not supported yet. 00:08:04.407 passed 00:08:04.407 Suite: bdevio tests on: Nvme0n1p1 00:08:04.407 Test: blockdev write read block ...passed 00:08:04.407 Test: blockdev write zeroes read block ...passed 00:08:04.407 Test: blockdev write zeroes read no split ...passed 00:08:04.407 Test: blockdev write zeroes read split ...passed 00:08:04.664 Test: blockdev write zeroes read split partial ...passed 00:08:04.664 Test: blockdev reset ...[2024-12-03 10:35:35.036387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:04.664 passed 00:08:04.664 Test: blockdev write read 8 blocks ...[2024-12-03 10:35:35.039977] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:04.664 passed 00:08:04.664 Test: blockdev write read size > 128k ...passed 00:08:04.664 Test: blockdev write read invalid size ...passed 00:08:04.664 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:04.664 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:04.664 Test: blockdev write read max offset ...passed 00:08:04.664 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:04.664 Test: blockdev writev readv 8 blocks ...passed 00:08:04.664 Test: blockdev writev readv 30 x 1block ...passed 00:08:04.664 Test: blockdev writev readv block ...passed 00:08:04.664 Test: blockdev writev readv size > 128k ...passed 00:08:04.664 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:04.664 Test: blockdev comparev and writev ...passed 00:08:04.664 Test: blockdev nvme passthru rw ...passed 00:08:04.664 Test: blockdev nvme passthru vendor specific ...passed 00:08:04.664 Test: blockdev nvme admin passthru ...passed 00:08:04.664 Test: blockdev copy ...[2024-12-03 10:35:35.052445] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:04.664 separate metadata which is not supported yet. 
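(Note: both GPT suites end with bdevio skipping comparev_and_writev; the run summary follows below. The two partitions sit on Nvme0n1, which this rig surfaces with 64-byte separate, non-interleaved metadata (the bdev dump earlier shows "md_size": 64 and "md_interleave": false for both partitions), and as the *ERROR* line states, that compare-and-write path does not support separate-metadata bdevs yet. To predict which bdevs will be skipped, a jq filter over the same bdev_get_bdevs output works; a sketch:)

    # Sketch: list bdevs carrying separate (non-interleaved) metadata; these are
    # the ones bdevio's comparev_and_writev step will skip.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select((.md_size // 0) > 0 and (.md_interleave // false) == false) | .name'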
00:08:04.664 passed 00:08:04.664 00:08:04.664 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.664 suites 7 7 n/a 0 0 00:08:04.664 tests 161 161 161 0 0 00:08:04.664 asserts 1006 1006 1006 0 n/a 00:08:04.664 00:08:04.664 Elapsed time = 1.334 seconds 00:08:04.664 0 00:08:04.664 10:35:35 -- bdev/blockdev.sh@293 -- # killprocess 62006 00:08:04.664 10:35:35 -- common/autotest_common.sh@936 -- # '[' -z 62006 ']' 00:08:04.664 10:35:35 -- common/autotest_common.sh@940 -- # kill -0 62006 00:08:04.664 10:35:35 -- common/autotest_common.sh@941 -- # uname 00:08:04.664 10:35:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:04.664 10:35:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62006 00:08:04.664 10:35:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:04.664 10:35:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:04.664 killing process with pid 62006 00:08:04.664 10:35:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62006' 00:08:04.664 10:35:35 -- common/autotest_common.sh@955 -- # kill 62006 00:08:04.664 10:35:35 -- common/autotest_common.sh@960 -- # wait 62006 00:08:05.230 10:35:35 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:05.230 00:08:05.230 real 0m2.815s 00:08:05.230 user 0m7.166s 00:08:05.230 sys 0m0.355s 00:08:05.230 10:35:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.230 10:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.230 ************************************ 00:08:05.230 END TEST bdev_bounds 00:08:05.230 ************************************ 00:08:05.230 10:35:35 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:05.230 10:35:35 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:05.230 10:35:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.230 10:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.230 ************************************ 00:08:05.230 START TEST bdev_nbd 00:08:05.230 ************************************ 00:08:05.230 10:35:35 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:05.230 10:35:35 -- bdev/blockdev.sh@298 -- # uname -s 00:08:05.230 10:35:35 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:05.230 10:35:35 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.230 10:35:35 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:05.230 10:35:35 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:05.230 10:35:35 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:05.230 10:35:35 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:05.230 10:35:35 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:05.230 10:35:35 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:05.230 10:35:35 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:05.230 10:35:35 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:05.230 10:35:35 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:05.230 10:35:35 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:05.230 10:35:35 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:05.230 10:35:35 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:05.230 10:35:35 -- bdev/blockdev.sh@316 -- # nbd_pid=62069 00:08:05.230 10:35:35 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:05.230 10:35:35 -- bdev/blockdev.sh@318 -- # waitforlisten 62069 /var/tmp/spdk-nbd.sock 00:08:05.230 10:35:35 -- common/autotest_common.sh@829 -- # '[' -z 62069 ']' 00:08:05.230 10:35:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:05.230 10:35:35 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:05.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:05.230 10:35:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.230 10:35:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:05.230 10:35:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.230 10:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.230 [2024-12-03 10:35:35.837187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.230 [2024-12-03 10:35:35.837295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.488 [2024-12-03 10:35:35.983743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.745 [2024-12-03 10:35:36.173000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.146 10:35:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.146 10:35:37 -- common/autotest_common.sh@862 -- # return 0 00:08:07.146 10:35:37 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@24 -- # local i 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:07.146 10:35:37 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:07.146 10:35:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:07.146 10:35:37 -- common/autotest_common.sh@867 -- # local i 00:08:07.146 10:35:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:07.146 10:35:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:07.146 10:35:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:07.146 10:35:37 -- common/autotest_common.sh@871 -- # break 00:08:07.146 10:35:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:07.146 10:35:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:07.146 10:35:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.146 1+0 records in 00:08:07.146 1+0 records out 00:08:07.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101994 s, 4.0 MB/s 00:08:07.146 10:35:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.146 10:35:37 -- common/autotest_common.sh@884 -- # size=4096 00:08:07.146 10:35:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.146 10:35:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:07.146 10:35:37 -- common/autotest_common.sh@887 -- # return 0 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:07.146 10:35:37 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:07.405 10:35:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:07.405 10:35:37 -- common/autotest_common.sh@867 -- # local i 00:08:07.405 10:35:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:07.405 10:35:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:07.405 10:35:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:07.405 10:35:37 -- common/autotest_common.sh@871 -- # break 00:08:07.405 10:35:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:07.405 10:35:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:07.405 10:35:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.405 1+0 records in 00:08:07.405 1+0 records out 00:08:07.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126032 s, 3.2 MB/s 00:08:07.405 10:35:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.405 10:35:37 -- common/autotest_common.sh@884 -- # size=4096 00:08:07.405 10:35:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.405 10:35:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:07.405 10:35:37 -- common/autotest_common.sh@887 -- # return 0 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:07.405 10:35:37 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:07.405 10:35:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:07.405 10:35:37 -- common/autotest_common.sh@867 -- # local i 00:08:07.405 10:35:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:07.405 10:35:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:07.405 10:35:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:07.405 10:35:38 -- common/autotest_common.sh@871 -- # break 00:08:07.405 10:35:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:07.405 10:35:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:07.405 10:35:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.405 1+0 records in 00:08:07.405 1+0 records out 00:08:07.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837347 s, 4.9 MB/s 00:08:07.405 10:35:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.405 10:35:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:07.405 10:35:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.405 10:35:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:07.405 10:35:38 -- common/autotest_common.sh@887 -- # return 0 00:08:07.405 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:07.405 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:07.405 10:35:38 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:07.662 10:35:38 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:07.662 10:35:38 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:07.662 10:35:38 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:07.662 10:35:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:07.662 10:35:38 -- common/autotest_common.sh@867 -- # local i 00:08:07.662 10:35:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:07.662 10:35:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:07.662 10:35:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:07.662 10:35:38 -- common/autotest_common.sh@871 -- # break 00:08:07.662 10:35:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:07.662 10:35:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:07.662 10:35:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.662 1+0 records in 00:08:07.662 1+0 records out 00:08:07.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551485 s, 7.4 MB/s 00:08:07.662 10:35:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.662 10:35:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:07.662 10:35:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.662 10:35:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:07.662 10:35:38 -- common/autotest_common.sh@887 -- # return 0 00:08:07.662 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:07.662 10:35:38 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:07.662 10:35:38 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:07.920 10:35:38 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:07.920 10:35:38 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:07.920 10:35:38 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:07.921 10:35:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:07.921 10:35:38 -- common/autotest_common.sh@867 -- # local i 00:08:07.921 10:35:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:07.921 10:35:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:07.921 10:35:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:07.921 10:35:38 -- common/autotest_common.sh@871 -- # break 00:08:07.921 10:35:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:07.921 10:35:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:07.921 10:35:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.921 1+0 records in 00:08:07.921 1+0 records out 00:08:07.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105229 s, 3.9 MB/s 00:08:07.921 10:35:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.921 10:35:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:07.921 10:35:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.921 10:35:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:07.921 10:35:38 -- common/autotest_common.sh@887 -- # return 0 00:08:07.921 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:07.921 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:07.921 10:35:38 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:08.178 10:35:38 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:08.178 10:35:38 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:08.178 10:35:38 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:08.178 10:35:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:08.178 10:35:38 -- common/autotest_common.sh@867 -- # local i 00:08:08.178 10:35:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:08.178 10:35:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:08.178 10:35:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:08.178 10:35:38 -- common/autotest_common.sh@871 -- # break 00:08:08.178 10:35:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:08.178 10:35:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:08.178 10:35:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.178 1+0 records in 00:08:08.178 1+0 records out 00:08:08.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620455 s, 6.6 MB/s 00:08:08.178 10:35:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.178 10:35:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:08.178 10:35:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.178 10:35:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:08.178 10:35:38 -- common/autotest_common.sh@887 -- # return 0 
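(Note: the xtrace above is one pass of the same export-and-probe pattern applied to each of the seven bdevs; the last one, Nvme3n1, follows below. nbd_start_disk maps a bdev onto the next free /dev/nbdX, waitfornbd polls /proc/partitions until the kernel node exists, and a single O_DIRECT dd of one 4 KiB block proves I/O flows end to end; teardown later reverses it with nbd_stop_disk and checks that nbd_get_disks comes back empty. Boiled down, a sketch of the two helpers, with /tmp/nbdtest standing in for the repo's scratch file:)

    # Sketch of the per-bdev NBD round trip driven by this test.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_probe() {
        local bdev=$1 nbd
        nbd=$("$rpc_py" -s "$sock" nbd_start_disk "$bdev")     # prints e.g. /dev/nbd0
        until grep -q -w "${nbd#/dev/}" /proc/partitions; do sleep 0.1; done
        dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]              # the read really landed
    }

    nbd_teardown() {
        local nbd=$1
        "$rpc_py" -s "$sock" nbd_stop_disk "$nbd"
        while grep -q -w "${nbd#/dev/}" /proc/partitions; do sleep 0.1; done
    }

    nbd_probe Nvme0n1p1 && nbd_teardown /dev/nbd0
    # After stopping everything, the socket should report no exported disks:
    "$rpc_py" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'   # expect empty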
00:08:08.178 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.178 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:08.178 10:35:38 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:08.436 10:35:38 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:08.436 10:35:38 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:08.436 10:35:38 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:08.436 10:35:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:08.436 10:35:38 -- common/autotest_common.sh@867 -- # local i 00:08:08.436 10:35:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:08.436 10:35:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:08.436 10:35:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:08.436 10:35:38 -- common/autotest_common.sh@871 -- # break 00:08:08.436 10:35:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:08.436 10:35:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:08.436 10:35:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.436 1+0 records in 00:08:08.436 1+0 records out 00:08:08.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702756 s, 5.8 MB/s 00:08:08.436 10:35:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.436 10:35:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:08.436 10:35:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.436 10:35:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:08.436 10:35:38 -- common/autotest_common.sh@887 -- # return 0 00:08:08.436 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.436 10:35:38 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:08.436 10:35:38 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd0", 00:08:08.694 "bdev_name": "Nvme0n1p1" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd1", 00:08:08.694 "bdev_name": "Nvme0n1p2" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd2", 00:08:08.694 "bdev_name": "Nvme1n1" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd3", 00:08:08.694 "bdev_name": "Nvme2n1" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd4", 00:08:08.694 "bdev_name": "Nvme2n2" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd5", 00:08:08.694 "bdev_name": "Nvme2n3" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd6", 00:08:08.694 "bdev_name": "Nvme3n1" 00:08:08.694 } 00:08:08.694 ]' 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd0", 00:08:08.694 "bdev_name": "Nvme0n1p1" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd1", 00:08:08.694 "bdev_name": "Nvme0n1p2" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd2", 00:08:08.694 "bdev_name": "Nvme1n1" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": 
"/dev/nbd3", 00:08:08.694 "bdev_name": "Nvme2n1" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd4", 00:08:08.694 "bdev_name": "Nvme2n2" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd5", 00:08:08.694 "bdev_name": "Nvme2n3" 00:08:08.694 }, 00:08:08.694 { 00:08:08.694 "nbd_device": "/dev/nbd6", 00:08:08.694 "bdev_name": "Nvme3n1" 00:08:08.694 } 00:08:08.694 ]' 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@51 -- # local i 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.694 10:35:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@41 -- # break 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@41 -- # break 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.952 10:35:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@41 -- # break 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.212 10:35:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:08:09.470 10:35:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@41 -- # break 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.470 10:35:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@41 -- # break 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.726 10:35:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@41 -- # break 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@41 -- # break 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.984 10:35:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.242 10:35:40 -- 
bdev/nbd_common.sh@65 -- # true 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@122 -- # count=0 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@127 -- # return 0 00:08:10.242 10:35:40 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@12 -- # local i 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:10.242 10:35:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:10.499 /dev/nbd0 00:08:10.499 10:35:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:10.499 10:35:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:10.499 10:35:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:10.499 10:35:41 -- common/autotest_common.sh@867 -- # local i 00:08:10.499 10:35:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:10.499 10:35:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:10.499 10:35:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:10.499 10:35:41 -- common/autotest_common.sh@871 -- # break 00:08:10.499 10:35:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:10.499 10:35:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:10.499 10:35:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.499 1+0 records in 00:08:10.499 1+0 records out 00:08:10.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808404 s, 5.1 MB/s 00:08:10.499 10:35:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.499 10:35:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:10.499 10:35:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.499 10:35:41 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:10.500 10:35:41 -- common/autotest_common.sh@887 -- # return 0 00:08:10.500 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.500 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:10.500 10:35:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:10.756 /dev/nbd1 00:08:10.756 10:35:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:10.756 10:35:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:10.756 10:35:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:10.756 10:35:41 -- common/autotest_common.sh@867 -- # local i 00:08:10.756 10:35:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:10.756 10:35:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:10.756 10:35:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:10.756 10:35:41 -- common/autotest_common.sh@871 -- # break 00:08:10.756 10:35:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:10.756 10:35:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:10.756 10:35:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.756 1+0 records in 00:08:10.756 1+0 records out 00:08:10.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000876599 s, 4.7 MB/s 00:08:10.756 10:35:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.756 10:35:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:10.756 10:35:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.756 10:35:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:10.756 10:35:41 -- common/autotest_common.sh@887 -- # return 0 00:08:10.756 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.756 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:10.756 10:35:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:11.014 /dev/nbd10 00:08:11.014 10:35:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:11.014 10:35:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:11.014 10:35:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:11.014 10:35:41 -- common/autotest_common.sh@867 -- # local i 00:08:11.014 10:35:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:11.014 10:35:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:11.014 10:35:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:11.014 10:35:41 -- common/autotest_common.sh@871 -- # break 00:08:11.014 10:35:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:11.014 10:35:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:11.014 10:35:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.014 1+0 records in 00:08:11.014 1+0 records out 00:08:11.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606684 s, 6.8 MB/s 00:08:11.014 10:35:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.014 10:35:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:11.014 10:35:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
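
Each bdev/device pairing above follows the same three steps: nbd_start_disk over the RPC socket, basename, waitfornbd. The driving loop (nbd_common.sh@9-@17), reconstructed from the trace; the (( i < 7 )) guard in the trace is simply the expanded length of this run's seven-entry lists:

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)  # e.g. Nvme0n1p1 Nvme0n1p2 Nvme1n1 ... Nvme3n1
        local nbd_list=($3)   # e.g. /dev/nbd0 /dev/nbd1 /dev/nbd10 ... /dev/nbd14
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            scripts/rpc.py -s "$rpc_server" nbd_start_disk \
                "${bdev_list[$i]}" "${nbd_list[$i]}"
            # block until the kernel device is actually usable (sketched earlier)
            waitfornbd "$(basename "${nbd_list[$i]}")"
        done
    }
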
00:08:11.014 10:35:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:11.014 10:35:41 -- common/autotest_common.sh@887 -- # return 0 00:08:11.014 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.014 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:11.014 10:35:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:11.272 /dev/nbd11 00:08:11.272 10:35:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:11.272 10:35:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:11.272 10:35:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:11.272 10:35:41 -- common/autotest_common.sh@867 -- # local i 00:08:11.272 10:35:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:11.272 10:35:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:11.272 10:35:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:11.272 10:35:41 -- common/autotest_common.sh@871 -- # break 00:08:11.272 10:35:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:11.272 10:35:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:11.272 10:35:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.272 1+0 records in 00:08:11.272 1+0 records out 00:08:11.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606918 s, 6.7 MB/s 00:08:11.272 10:35:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.272 10:35:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:11.272 10:35:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.272 10:35:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:11.272 10:35:41 -- common/autotest_common.sh@887 -- # return 0 00:08:11.272 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.272 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:11.272 10:35:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:11.530 /dev/nbd12 00:08:11.530 10:35:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:11.530 10:35:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:11.530 10:35:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:11.530 10:35:41 -- common/autotest_common.sh@867 -- # local i 00:08:11.530 10:35:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:11.530 10:35:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:11.530 10:35:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:11.530 10:35:41 -- common/autotest_common.sh@871 -- # break 00:08:11.530 10:35:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:11.530 10:35:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:11.530 10:35:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.530 1+0 records in 00:08:11.530 1+0 records out 00:08:11.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107187 s, 3.8 MB/s 00:08:11.530 10:35:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.530 10:35:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:11.530 10:35:41 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.530 10:35:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:11.530 10:35:41 -- common/autotest_common.sh@887 -- # return 0 00:08:11.530 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.530 10:35:41 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:11.530 10:35:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:11.530 /dev/nbd13 00:08:11.530 10:35:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:11.530 10:35:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:11.530 10:35:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:11.530 10:35:42 -- common/autotest_common.sh@867 -- # local i 00:08:11.530 10:35:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:11.530 10:35:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:11.530 10:35:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:11.530 10:35:42 -- common/autotest_common.sh@871 -- # break 00:08:11.530 10:35:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:11.530 10:35:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:11.530 10:35:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.530 1+0 records in 00:08:11.530 1+0 records out 00:08:11.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099595 s, 4.1 MB/s 00:08:11.530 10:35:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.790 10:35:42 -- common/autotest_common.sh@884 -- # size=4096 00:08:11.790 10:35:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.790 10:35:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:11.790 10:35:42 -- common/autotest_common.sh@887 -- # return 0 00:08:11.790 10:35:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.790 10:35:42 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:11.790 10:35:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:11.790 /dev/nbd14 00:08:11.790 10:35:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:11.790 10:35:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:11.790 10:35:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:11.791 10:35:42 -- common/autotest_common.sh@867 -- # local i 00:08:11.791 10:35:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:11.791 10:35:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:11.791 10:35:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:11.791 10:35:42 -- common/autotest_common.sh@871 -- # break 00:08:11.791 10:35:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:11.791 10:35:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:11.791 10:35:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.791 1+0 records in 00:08:11.791 1+0 records out 00:08:11.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119395 s, 3.4 MB/s 00:08:11.791 10:35:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.791 10:35:42 -- common/autotest_common.sh@884 -- # size=4096 00:08:11.791 10:35:42 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.791 10:35:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:11.791 10:35:42 -- common/autotest_common.sh@887 -- # return 0 00:08:11.791 10:35:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.791 10:35:42 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:11.791 10:35:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.791 10:35:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.791 10:35:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:12.051 10:35:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd0", 00:08:12.051 "bdev_name": "Nvme0n1p1" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd1", 00:08:12.051 "bdev_name": "Nvme0n1p2" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd10", 00:08:12.051 "bdev_name": "Nvme1n1" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd11", 00:08:12.051 "bdev_name": "Nvme2n1" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd12", 00:08:12.051 "bdev_name": "Nvme2n2" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd13", 00:08:12.051 "bdev_name": "Nvme2n3" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd14", 00:08:12.051 "bdev_name": "Nvme3n1" 00:08:12.051 } 00:08:12.051 ]' 00:08:12.051 10:35:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.051 10:35:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd0", 00:08:12.051 "bdev_name": "Nvme0n1p1" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd1", 00:08:12.051 "bdev_name": "Nvme0n1p2" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd10", 00:08:12.051 "bdev_name": "Nvme1n1" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd11", 00:08:12.051 "bdev_name": "Nvme2n1" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd12", 00:08:12.051 "bdev_name": "Nvme2n2" 00:08:12.051 }, 00:08:12.051 { 00:08:12.051 "nbd_device": "/dev/nbd13", 00:08:12.051 "bdev_name": "Nvme2n3" 00:08:12.051 }, 00:08:12.052 { 00:08:12.052 "nbd_device": "/dev/nbd14", 00:08:12.052 "bdev_name": "Nvme3n1" 00:08:12.052 } 00:08:12.052 ]' 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:12.052 /dev/nbd1 00:08:12.052 /dev/nbd10 00:08:12.052 /dev/nbd11 00:08:12.052 /dev/nbd12 00:08:12.052 /dev/nbd13 00:08:12.052 /dev/nbd14' 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:12.052 /dev/nbd1 00:08:12.052 /dev/nbd10 00:08:12.052 /dev/nbd11 00:08:12.052 /dev/nbd12 00:08:12.052 /dev/nbd13 00:08:12.052 /dev/nbd14' 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@65 -- # count=7 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@95 -- # count=7 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:12.052 256+0 records in 00:08:12.052 256+0 records out 00:08:12.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443952 s, 236 MB/s 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.052 10:35:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:12.313 256+0 records in 00:08:12.313 256+0 records out 00:08:12.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.261455 s, 4.0 MB/s 00:08:12.313 10:35:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.313 10:35:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:12.574 256+0 records in 00:08:12.574 256+0 records out 00:08:12.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.254655 s, 4.1 MB/s 00:08:12.574 10:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.574 10:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:12.836 256+0 records in 00:08:12.836 256+0 records out 00:08:12.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209109 s, 5.0 MB/s 00:08:12.836 10:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.836 10:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:13.098 256+0 records in 00:08:13.098 256+0 records out 00:08:13.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.256927 s, 4.1 MB/s 00:08:13.098 10:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.098 10:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:13.359 256+0 records in 00:08:13.359 256+0 records out 00:08:13.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.258986 s, 4.0 MB/s 00:08:13.359 10:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.359 10:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:13.620 256+0 records in 00:08:13.620 256+0 records out 00:08:13.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.260356 s, 4.0 MB/s 00:08:13.620 10:35:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.620 10:35:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:13.882 256+0 records in 00:08:13.882 256+0 records out 00:08:13.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218158 s, 4.8 MB/s 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@51 -- # local i 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.882 10:35:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@41 -- # break 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.143 10:35:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@41 -- # break 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.404 10:35:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@41 -- # break 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.673 10:35:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@41 -- # break 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.941 10:35:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@41 -- # break 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@41 -- # break 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.202 10:35:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
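
The stop loop traced here mirrors the start loop: nbd_stop_disk per device, then waitfornbd_exit (nbd_common.sh@35-@45), which polls until the kernel drops the name from /proc/partitions. A sketch under the same assumption about the retry delay:

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1  # assumed; device still present, retry
            else
                break      # the @41 break in the trace: the name is gone
            fi
        done
        return 0
    }
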
00:08:15.202 10:35:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@41 -- # break 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.464 10:35:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@65 -- # true 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@65 -- # count=0 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@104 -- # count=0 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@109 -- # return 0 00:08:15.726 10:35:46 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:15.726 10:35:46 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:15.986 malloc_lvol_verify 00:08:15.986 10:35:46 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:16.248 99ba7b43-cf00-4ebe-b8dc-fe5d3b8a8249 00:08:16.248 10:35:46 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:16.510 480f4d5f-5ebf-4597-8ea2-e5c306d132d0 00:08:16.510 10:35:46 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:16.510 /dev/nbd0 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:16.776 mke2fs 1.47.0 (5-Feb-2023) 00:08:16.776 Discarding device blocks: 0/4096 done 00:08:16.776 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:16.776 00:08:16.776 Allocating group tables: 0/1 done 00:08:16.776 Writing inode tables: 0/1 done 00:08:16.776 Creating journal (1024 blocks): done 
00:08:16.776 Writing superblocks and filesystem accounting information: 0/1 done 00:08:16.776 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@51 -- # local i 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:16.776 10:35:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@41 -- # break 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:17.042 10:35:47 -- bdev/nbd_common.sh@147 -- # return 0 00:08:17.042 10:35:47 -- bdev/blockdev.sh@324 -- # killprocess 62069 00:08:17.042 10:35:47 -- common/autotest_common.sh@936 -- # '[' -z 62069 ']' 00:08:17.042 10:35:47 -- common/autotest_common.sh@940 -- # kill -0 62069 00:08:17.042 10:35:47 -- common/autotest_common.sh@941 -- # uname 00:08:17.042 10:35:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:17.042 10:35:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62069 00:08:17.042 killing process with pid 62069 00:08:17.042 10:35:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:17.042 10:35:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:17.042 10:35:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62069' 00:08:17.042 10:35:47 -- common/autotest_common.sh@955 -- # kill 62069 00:08:17.042 10:35:47 -- common/autotest_common.sh@960 -- # wait 62069 00:08:19.576 10:35:49 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:19.576 00:08:19.576 real 0m14.188s 00:08:19.576 user 0m17.937s 00:08:19.576 sys 0m4.414s 00:08:19.576 10:35:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.576 ************************************ 00:08:19.576 END TEST bdev_nbd 00:08:19.576 ************************************ 00:08:19.576 10:35:49 -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 10:35:50 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:19.576 10:35:50 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:19.576 skipping fio tests on NVMe due to multi-ns failures. 00:08:19.576 10:35:50 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:19.576 10:35:50 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
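
The nbd_with_lvol_verify pass that just finished checks that a logical volume exported over nbd behaves like a real disk: carve an lvol out of a malloc bdev, export it as /dev/nbd0, and let mkfs.ext4 build a filesystem on it. Condensed from the trace (arguments as traced; the lvstore/lvol UUIDs are generated at runtime, and the rpc.py path is shortened):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # backing bdev: 16 MB, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # prints the new lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                   # small (4 MB) lvol, addressed as lvs/lvol
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0   # mkfs_ret=0 in the trace records this succeeding
    # teardown: nbd_stop_disk + waitfornbd_exit, then killprocess 62069
    # stops the nbd server that hosted the whole bdev_nbd test
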
00:08:19.576 10:35:50 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:19.576 10:35:50 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:19.576 10:35:50 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:19.576 10:35:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:19.576 10:35:50 -- common/autotest_common.sh@10 -- # set +x
00:08:19.576 ************************************
00:08:19.576 START TEST bdev_verify
00:08:19.576 ************************************
00:08:19.576 10:35:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:19.576 [2024-12-03 10:35:50.072007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:19.576 [2024-12-03 10:35:50.072132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62509 ]
00:08:19.835 [2024-12-03 10:35:50.224307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:19.835 [2024-12-03 10:35:50.444778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:19.835 [2024-12-03 10:35:50.444926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.770 Running I/O for 5 seconds...
00:08:26.040
00:08:26.040 Latency(us)
00:08:26.040 [2024-12-03T10:35:56.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x0 length 0x5e800
00:08:26.040 Nvme0n1p1 : 5.05 2393.63 9.35 0.00 0.00 53316.36 9376.69 62914.56
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x5e800 length 0x5e800
00:08:26.040 Nvme0n1p1 : 5.05 2360.99 9.22 0.00 0.00 54049.65 6452.78 63317.86
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x0 length 0x5e7ff
00:08:26.040 Nvme0n1p2 : 5.06 2392.15 9.34 0.00 0.00 53296.53 11645.24 60091.47
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:08:26.040 Nvme0n1p2 : 5.06 2364.55 9.24 0.00 0.00 53896.62 4688.34 54445.29
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x0 length 0xa0000
00:08:26.040 Nvme1n1 : 5.06 2390.98 9.34 0.00 0.00 53204.75 12451.84 53638.70
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0xa0000 length 0xa0000
00:08:26.040 Nvme1n1 : 5.06 2363.94 9.23 0.00 0.00 53865.14 5293.29 53638.70
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x0 length 0x80000
00:08:26.040 Nvme2n1 : 5.06 2395.90 9.36 0.00 0.00 53073.92 2697.06 53638.70
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x80000 length 0x80000
00:08:26.040 Nvme2n1 : 5.07 2361.89 9.23 0.00 0.00 53808.11 9074.22 51017.26
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x0 length 0x80000
00:08:26.040 Nvme2n2 : 5.07 2393.88 9.35 0.00 0.00 53019.37 6150.30 53235.40
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x80000 length 0x80000
00:08:26.040 Nvme2n2 : 5.07 2359.85 9.22 0.00 0.00 53779.14 12703.90 51622.20
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x0 length 0x80000
00:08:26.040 Nvme2n3 : 5.07 2391.84 9.34 0.00 0.00 52988.30 9679.16 53638.70
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x80000 length 0x80000
00:08:26.040 Nvme2n3 : 5.08 2357.95 9.21 0.00 0.00 53747.37 15930.29 52832.10
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x0 length 0x20000
00:08:26.040 Nvme3n1 : 5.08 2389.95 9.34 0.00 0.00 52963.65 13107.20 53638.70
00:08:26.040 [2024-12-03T10:35:56.653Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:26.040 Verification LBA range: start 0x20000 length 0x20000
00:08:26.040 Nvme3n1 : 5.08 2356.23 9.20 0.00 0.00 53720.52 14216.27 50815.61
00:08:26.040 [2024-12-03T10:35:56.653Z] ===================================================================================================================
00:08:26.040 [2024-12-03T10:35:56.653Z] Total : 33273.73 129.98 0.00 0.00 53478.27 2697.06 63317.86
00:08:30.237
00:08:30.237 real 0m10.568s
00:08:30.237 user 0m17.591s
00:08:30.237 sys 0m0.338s
00:08:30.237 10:36:00 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:30.237 ************************************
00:08:30.237 END TEST bdev_verify
00:08:30.237 ************************************
00:08:30.237 10:36:00 -- common/autotest_common.sh@10 -- # set +x
00:08:30.237 10:36:00 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:30.237 10:36:00 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:30.237 10:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:30.237 10:36:00 -- common/autotest_common.sh@10 -- # set +x
00:08:30.237 ************************************
00:08:30.237 START TEST bdev_verify_big_io
00:08:30.237 ************************************
00:08:30.237 10:36:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:30.237 [2024-12-03 10:36:00.729207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:30.237 [2024-12-03 10:36:00.729346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62639 ]
00:08:30.498 [2024-12-03 10:36:00.885405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:30.760 [2024-12-03 10:36:01.201264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:30.760 [2024-12-03 10:36:01.201390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:31.708 Running I/O for 5 seconds...
00:08:36.983
00:08:36.983 Latency(us)
00:08:36.983 [2024-12-03T10:36:07.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x0 length 0x5e80
00:08:36.983 Nvme0n1p1 : 5.38 241.21 15.08 0.00 0.00 517797.44 45976.02 774333.05
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x5e80 length 0x5e80
00:08:36.983 Nvme0n1p1 : 5.37 259.01 16.19 0.00 0.00 485630.49 42951.29 751748.33
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x0 length 0x5e7f
00:08:36.983 Nvme0n1p2 : 5.41 249.19 15.57 0.00 0.00 498642.00 33877.07 713031.68
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x5e7f length 0x5e7f
00:08:36.983 Nvme0n1p2 : 5.37 258.92 16.18 0.00 0.00 480351.23 43354.58 687220.58
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x0 length 0xa000
00:08:36.983 Nvme1n1 : 5.42 249.12 15.57 0.00 0.00 491491.15 34482.02 651730.31
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0xa000 length 0xa000
00:08:36.983 Nvme1n1 : 5.37 258.85 16.18 0.00 0.00 474124.46 43757.88 645277.54
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x0 length 0x8000
00:08:36.983 Nvme2n1 : 5.42 249.04 15.57 0.00 0.00 484488.29 35490.26 593655.34
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x8000 length 0x8000
00:08:36.983 Nvme2n1 : 5.41 264.33 16.52 0.00 0.00 459077.46 33877.07 590428.95
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x0 length 0x8000
00:08:36.983 Nvme2n2 : 5.44 255.67 15.98 0.00 0.00 466242.01 21979.77 558165.07
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x8000 length 0x8000
00:08:36.983 Nvme2n2 : 5.41 264.25 16.52 0.00 0.00 452730.87 34280.37 535580.36
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x0 length 0x8000
00:08:36.983 Nvme2n3 : 5.45 261.62 16.35 0.00 0.00 450128.00 8015.56 948557.98
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x8000 length 0x8000
00:08:36.983 Nvme2n3 : 5.44 280.08 17.50 0.00 0.00 423479.69 20164.92 577523.40
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x0 length 0x2000
00:08:36.983 Nvme3n1 : 5.48 293.96 18.37 0.00 0.00 395809.27 463.16 955010.76
00:08:36.983 [2024-12-03T10:36:07.596Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:36.983 Verification LBA range: start 0x2000 length 0x2000
00:08:36.983 Nvme3n1 : 5.45 295.42 18.46 0.00 0.00 396760.13 3276.80 583976.17
00:08:36.983 [2024-12-03T10:36:07.596Z] ===================================================================================================================
00:08:36.983 [2024-12-03T10:36:07.596Z] Total : 3680.67 230.04 0.00 0.00 460415.60 463.16 955010.76
00:08:39.519 ************************************
00:08:39.519 END TEST bdev_verify_big_io
00:08:39.520 ************************************
00:08:39.520
00:08:39.520 real 0m9.075s
00:08:39.520 user 0m15.387s
00:08:39.520 sys 0m0.412s
00:08:39.520 10:36:09 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:39.520 10:36:09 -- common/autotest_common.sh@10 -- # set +x
00:08:39.520 10:36:09 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:39.520 10:36:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:39.520 10:36:09 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:39.520 10:36:09 -- common/autotest_common.sh@10 -- # set +x
00:08:39.520 ************************************
00:08:39.520 START TEST bdev_write_zeroes
00:08:39.520 ************************************
00:08:39.520 10:36:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:39.520 [2024-12-03 10:36:09.854329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:39.520 [2024-12-03 10:36:09.854439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62750 ]
00:08:39.777 [2024-12-03 10:36:10.002435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.835 [2024-12-03 10:36:10.205272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:40.344 Running I/O for 1 seconds...
00:08:41.282
00:08:41.282 Latency(us)
00:08:41.282 [2024-12-03T10:36:11.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:41.282 [2024-12-03T10:36:11.895Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:41.282 Nvme0n1p1 : 1.02 8106.10 31.66 0.00 0.00 15683.92 6351.95 33070.47
00:08:41.282 [2024-12-03T10:36:11.895Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:41.282 Nvme0n1p2 : 1.02 8087.71 31.59 0.00 0.00 15665.16 6427.57 37910.06
00:08:41.282 [2024-12-03T10:36:11.895Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:41.282 Nvme1n1 : 1.01 8453.18 33.02 0.00 0.00 15071.17 7158.55 25306.98
00:08:41.282 [2024-12-03T10:36:11.895Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:41.282 Nvme2n1 : 1.02 8442.96 32.98 0.00 0.00 15067.23 7561.85 25811.10
00:08:41.282 [2024-12-03T10:36:11.895Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:41.282 Nvme2n2 : 1.02 8433.30 32.94 0.00 0.00 15031.98 7763.50 25710.28
00:08:41.282 [2024-12-03T10:36:11.895Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:41.282 Nvme2n3 : 1.02 8426.76 32.92 0.00 0.00 14978.55 5822.62 25811.10
00:08:41.282 [2024-12-03T10:36:11.895Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:41.282 Nvme3n1 : 1.02 8391.82 32.78 0.00 0.00 15025.51 5217.67 25710.28
00:08:41.282 [2024-12-03T10:36:11.895Z] ===================================================================================================================
00:08:41.282 [2024-12-03T10:36:11.895Z] Total : 58341.82 227.90 0.00 0.00 15212.55 5217.67 37910.06
00:08:42.227
00:08:42.227 real 0m2.875s
00:08:42.227 user 0m2.553s
00:08:42.227 sys 0m0.206s
00:08:42.227 10:36:12 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:42.227 ************************************
00:08:42.227 10:36:12 -- common/autotest_common.sh@10 -- # set +x
00:08:42.227 END TEST bdev_write_zeroes
00:08:42.227 ************************************
00:08:42.227 10:36:12 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:42.227 10:36:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:42.227 10:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:42.227 10:36:12 -- common/autotest_common.sh@10 -- # set +x
00:08:42.227 ************************************
00:08:42.227 START TEST bdev_json_nonenclosed
00:08:42.227 ************************************
00:08:42.227 10:36:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:42.227 [2024-12-03 10:36:12.813874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:42.227 [2024-12-03 10:36:12.814021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62803 ] 00:08:42.489 [2024-12-03 10:36:12.967546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.749 [2024-12-03 10:36:13.279217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.749 [2024-12-03 10:36:13.279485] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:42.749 [2024-12-03 10:36:13.279510] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.318 00:08:43.318 real 0m0.914s 00:08:43.318 user 0m0.657s 00:08:43.318 sys 0m0.147s 00:08:43.318 10:36:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.318 ************************************ 00:08:43.318 END TEST bdev_json_nonenclosed 00:08:43.318 ************************************ 00:08:43.318 10:36:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.318 10:36:13 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:43.318 10:36:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:43.318 10:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.318 10:36:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.318 ************************************ 00:08:43.318 START TEST bdev_json_nonarray 00:08:43.318 ************************************ 00:08:43.318 10:36:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:43.318 [2024-12-03 10:36:13.785909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.318 [2024-12-03 10:36:13.786066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62834 ] 00:08:43.578 [2024-12-03 10:36:13.940736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.839 [2024-12-03 10:36:14.231844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.839 [2024-12-03 10:36:14.232116] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:43.839 [2024-12-03 10:36:14.232142] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.100 00:08:44.100 real 0m0.871s 00:08:44.100 user 0m0.607s 00:08:44.100 sys 0m0.155s 00:08:44.100 10:36:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.100 ************************************ 00:08:44.100 END TEST bdev_json_nonarray 00:08:44.100 ************************************ 00:08:44.100 10:36:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.100 10:36:14 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:08:44.100 10:36:14 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:08:44.100 10:36:14 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:44.100 10:36:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:44.100 10:36:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.100 10:36:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.100 ************************************ 00:08:44.100 START TEST bdev_gpt_uuid 00:08:44.100 ************************************ 00:08:44.100 10:36:14 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:08:44.100 10:36:14 -- bdev/blockdev.sh@612 -- # local bdev 00:08:44.100 10:36:14 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:08:44.100 10:36:14 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62865 00:08:44.100 10:36:14 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:44.100 10:36:14 -- bdev/blockdev.sh@47 -- # waitforlisten 62865 00:08:44.100 10:36:14 -- common/autotest_common.sh@829 -- # '[' -z 62865 ']' 00:08:44.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.100 10:36:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.100 10:36:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.100 10:36:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.100 10:36:14 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:44.100 10:36:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.100 10:36:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.359 [2024-12-03 10:36:14.730703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:44.359 [2024-12-03 10:36:14.730851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62865 ] 00:08:44.359 [2024-12-03 10:36:14.883861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.619 [2024-12-03 10:36:15.104467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.619 [2024-12-03 10:36:15.104680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.003 10:36:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.003 10:36:16 -- common/autotest_common.sh@862 -- # return 0 00:08:46.003 10:36:16 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:46.003 10:36:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.003 10:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.003 Some configs were skipped because the RPC state that can call them passed over. 
00:08:46.003 10:36:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.003 10:36:16 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:08:46.003 10:36:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.003 10:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.003 10:36:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.003 10:36:16 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:46.003 10:36:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.003 10:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.265 10:36:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@619 -- # bdev='[ 00:08:46.265 { 00:08:46.265 "name": "Nvme0n1p1", 00:08:46.265 "aliases": [ 00:08:46.265 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:46.265 ], 00:08:46.265 "product_name": "GPT Disk", 00:08:46.265 "block_size": 4096, 00:08:46.265 "num_blocks": 774144, 00:08:46.265 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:46.265 "md_size": 64, 00:08:46.265 "md_interleave": false, 00:08:46.265 "dif_type": 0, 00:08:46.265 "assigned_rate_limits": { 00:08:46.265 "rw_ios_per_sec": 0, 00:08:46.265 "rw_mbytes_per_sec": 0, 00:08:46.265 "r_mbytes_per_sec": 0, 00:08:46.265 "w_mbytes_per_sec": 0 00:08:46.265 }, 00:08:46.265 "claimed": false, 00:08:46.265 "zoned": false, 00:08:46.265 "supported_io_types": { 00:08:46.265 "read": true, 00:08:46.265 "write": true, 00:08:46.265 "unmap": true, 00:08:46.265 "write_zeroes": true, 00:08:46.265 "flush": true, 00:08:46.265 "reset": true, 00:08:46.265 "compare": true, 00:08:46.265 "compare_and_write": false, 00:08:46.265 "abort": true, 00:08:46.265 "nvme_admin": false, 00:08:46.265 "nvme_io": false 00:08:46.265 }, 00:08:46.265 "driver_specific": { 00:08:46.265 "gpt": { 00:08:46.265 "base_bdev": "Nvme0n1", 00:08:46.265 "offset_blocks": 256, 00:08:46.265 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:46.265 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:46.265 "partition_name": "SPDK_TEST_first" 00:08:46.265 } 00:08:46.265 } 00:08:46.265 } 00:08:46.265 ]' 00:08:46.265 10:36:16 -- bdev/blockdev.sh@620 -- # jq -r length 00:08:46.265 10:36:16 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:08:46.265 10:36:16 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:46.265 10:36:16 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:46.265 10:36:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.265 10:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.265 10:36:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@624 -- # bdev='[ 00:08:46.265 { 00:08:46.265 "name": "Nvme0n1p2", 00:08:46.265 "aliases": [ 00:08:46.265 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:46.265 ], 00:08:46.265 "product_name": "GPT Disk", 00:08:46.265 "block_size": 4096, 00:08:46.265 "num_blocks": 774143, 00:08:46.265 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:08:46.265 "md_size": 64, 00:08:46.265 "md_interleave": false, 00:08:46.265 "dif_type": 0, 00:08:46.265 "assigned_rate_limits": { 00:08:46.265 "rw_ios_per_sec": 0, 00:08:46.265 "rw_mbytes_per_sec": 0, 00:08:46.265 "r_mbytes_per_sec": 0, 00:08:46.265 "w_mbytes_per_sec": 0 00:08:46.265 }, 00:08:46.265 "claimed": false, 00:08:46.265 "zoned": false, 00:08:46.265 "supported_io_types": { 00:08:46.265 "read": true, 00:08:46.265 "write": true, 00:08:46.265 "unmap": true, 00:08:46.265 "write_zeroes": true, 00:08:46.265 "flush": true, 00:08:46.265 "reset": true, 00:08:46.265 "compare": true, 00:08:46.265 "compare_and_write": false, 00:08:46.265 "abort": true, 00:08:46.265 "nvme_admin": false, 00:08:46.265 "nvme_io": false 00:08:46.265 }, 00:08:46.265 "driver_specific": { 00:08:46.265 "gpt": { 00:08:46.265 "base_bdev": "Nvme0n1", 00:08:46.265 "offset_blocks": 774400, 00:08:46.265 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:46.265 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:46.265 "partition_name": "SPDK_TEST_second" 00:08:46.265 } 00:08:46.265 } 00:08:46.265 } 00:08:46.265 ]' 00:08:46.265 10:36:16 -- bdev/blockdev.sh@625 -- # jq -r length 00:08:46.265 10:36:16 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:08:46.265 10:36:16 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:46.265 10:36:16 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:46.265 10:36:16 -- bdev/blockdev.sh@629 -- # killprocess 62865 00:08:46.265 10:36:16 -- common/autotest_common.sh@936 -- # '[' -z 62865 ']' 00:08:46.265 10:36:16 -- common/autotest_common.sh@940 -- # kill -0 62865 00:08:46.265 10:36:16 -- common/autotest_common.sh@941 -- # uname 00:08:46.265 10:36:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:46.265 10:36:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62865 00:08:46.265 killing process with pid 62865 00:08:46.265 10:36:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:46.265 10:36:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:46.265 10:36:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62865' 00:08:46.265 10:36:16 -- common/autotest_common.sh@955 -- # kill 62865 00:08:46.265 10:36:16 -- common/autotest_common.sh@960 -- # wait 62865 00:08:48.181 00:08:48.181 real 0m3.838s 00:08:48.181 user 0m3.981s 00:08:48.181 sys 0m0.560s 00:08:48.181 10:36:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.181 ************************************ 00:08:48.181 END TEST bdev_gpt_uuid 00:08:48.181 ************************************ 00:08:48.181 10:36:18 -- common/autotest_common.sh@10 -- # set +x 00:08:48.181 10:36:18 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:08:48.181 10:36:18 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:48.181 10:36:18 -- bdev/blockdev.sh@809 -- # cleanup 00:08:48.181 10:36:18 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:48.181 10:36:18 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:48.181 10:36:18 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:08:48.181 10:36:18 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:08:48.181 10:36:18 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:08:48.181 10:36:18 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:48.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:48.443 Waiting for block devices as requested 00:08:48.702 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.702 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.702 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.963 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.271 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:54.271 10:36:24 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:08:54.271 10:36:24 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:08:54.271 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:54.271 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:08:54.271 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:54.271 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:08:54.271 10:36:24 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:08:54.271 00:08:54.271 real 1m6.684s 00:08:54.271 user 1m21.232s 00:08:54.271 sys 0m9.903s 00:08:54.271 10:36:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.271 ************************************ 00:08:54.271 END TEST blockdev_nvme_gpt 00:08:54.271 ************************************ 00:08:54.271 10:36:24 -- common/autotest_common.sh@10 -- # set +x 00:08:54.271 10:36:24 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:54.271 10:36:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.271 10:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.271 10:36:24 -- common/autotest_common.sh@10 -- # set +x 00:08:54.271 ************************************ 00:08:54.271 START TEST nvme 00:08:54.271 ************************************ 00:08:54.271 10:36:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:54.271 * Looking for test storage... 
00:08:54.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:54.271 10:36:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:54.271 10:36:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:54.271 10:36:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:54.531 10:36:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:54.531 10:36:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:54.531 10:36:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:54.531 10:36:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:54.531 10:36:24 -- scripts/common.sh@335 -- # IFS=.-: 00:08:54.531 10:36:24 -- scripts/common.sh@335 -- # read -ra ver1 00:08:54.531 10:36:24 -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.531 10:36:24 -- scripts/common.sh@336 -- # read -ra ver2 00:08:54.531 10:36:24 -- scripts/common.sh@337 -- # local 'op=<' 00:08:54.531 10:36:24 -- scripts/common.sh@339 -- # ver1_l=2 00:08:54.531 10:36:24 -- scripts/common.sh@340 -- # ver2_l=1 00:08:54.531 10:36:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:54.531 10:36:24 -- scripts/common.sh@343 -- # case "$op" in 00:08:54.531 10:36:24 -- scripts/common.sh@344 -- # : 1 00:08:54.531 10:36:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:54.531 10:36:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:54.531 10:36:24 -- scripts/common.sh@364 -- # decimal 1 00:08:54.531 10:36:24 -- scripts/common.sh@352 -- # local d=1 00:08:54.531 10:36:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.531 10:36:24 -- scripts/common.sh@354 -- # echo 1 00:08:54.531 10:36:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:54.531 10:36:24 -- scripts/common.sh@365 -- # decimal 2 00:08:54.531 10:36:24 -- scripts/common.sh@352 -- # local d=2 00:08:54.531 10:36:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.531 10:36:24 -- scripts/common.sh@354 -- # echo 2 00:08:54.531 10:36:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:54.531 10:36:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:54.531 10:36:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:54.531 10:36:24 -- scripts/common.sh@367 -- # return 0 00:08:54.531 10:36:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.531 10:36:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:54.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.531 --rc genhtml_branch_coverage=1 00:08:54.531 --rc genhtml_function_coverage=1 00:08:54.531 --rc genhtml_legend=1 00:08:54.531 --rc geninfo_all_blocks=1 00:08:54.531 --rc geninfo_unexecuted_blocks=1 00:08:54.531 00:08:54.531 ' 00:08:54.531 10:36:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:54.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.531 --rc genhtml_branch_coverage=1 00:08:54.531 --rc genhtml_function_coverage=1 00:08:54.531 --rc genhtml_legend=1 00:08:54.531 --rc geninfo_all_blocks=1 00:08:54.531 --rc geninfo_unexecuted_blocks=1 00:08:54.531 00:08:54.531 ' 00:08:54.531 10:36:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:54.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.531 --rc genhtml_branch_coverage=1 00:08:54.531 --rc genhtml_function_coverage=1 00:08:54.531 --rc genhtml_legend=1 00:08:54.531 --rc geninfo_all_blocks=1 00:08:54.531 --rc geninfo_unexecuted_blocks=1 00:08:54.531 00:08:54.531 ' 00:08:54.531 10:36:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:54.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.531 --rc genhtml_branch_coverage=1 00:08:54.531 --rc genhtml_function_coverage=1 00:08:54.531 --rc genhtml_legend=1 00:08:54.531 --rc geninfo_all_blocks=1 00:08:54.531 --rc geninfo_unexecuted_blocks=1 00:08:54.531 00:08:54.531 ' 00:08:54.531 10:36:24 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:55.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.471 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.471 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.471 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.471 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.729 10:36:26 -- nvme/nvme.sh@79 -- # uname 00:08:55.729 10:36:26 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:55.729 10:36:26 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:55.729 10:36:26 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:55.729 10:36:26 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:55.729 10:36:26 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:08:55.729 10:36:26 -- common/autotest_common.sh@1055 -- # echo 0 00:08:55.729 Waiting for stub to ready for secondary processes... 00:08:55.729 10:36:26 -- common/autotest_common.sh@1057 -- # stubpid=63541 00:08:55.729 10:36:26 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:08:55.729 10:36:26 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:55.729 10:36:26 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:55.729 10:36:26 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63541 ]] 00:08:55.729 10:36:26 -- common/autotest_common.sh@1062 -- # sleep 1s 00:08:55.729 [2024-12-03 10:36:26.119992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:55.729 [2024-12-03 10:36:26.120110] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.308 [2024-12-03 10:36:26.844571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.569 [2024-12-03 10:36:27.035019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.569 [2024-12-03 10:36:27.035172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.569 [2024-12-03 10:36:27.035180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.569 [2024-12-03 10:36:27.054238] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:56.569 [2024-12-03 10:36:27.065198] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:56.569 [2024-12-03 10:36:27.065348] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:56.569 [2024-12-03 10:36:27.080476] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:56.569 [2024-12-03 10:36:27.080633] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:56.569 [2024-12-03 10:36:27.080744] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:56.569 [2024-12-03 10:36:27.087826] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:56.569 [2024-12-03 10:36:27.088049] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:56.569 [2024-12-03 10:36:27.088155] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:56.569 10:36:27 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:56.569 10:36:27 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63541 ]] 00:08:56.569 [2024-12-03 10:36:27.095442] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:56.569 10:36:27 -- common/autotest_common.sh@1062 -- # sleep 1s 00:08:56.569 [2024-12-03 10:36:27.095564] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:56.569 [2024-12-03 10:36:27.095664] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:56.569 [2024-12-03 10:36:27.095765] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:56.569 [2024-12-03 10:36:27.095901] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:57.504 done. 00:08:57.504 10:36:28 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:57.504 10:36:28 -- common/autotest_common.sh@1064 -- # echo done. 
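The handshake that just completed is autotest's stub gate: common/autotest_common.sh backgrounds test/app/stub/stub with -s 4096 -i 0 -m 0xE, then once per second re-checks whether the stub has created /var/run/spdk_stub0, aborting early if /proc/<stubpid> vanishes, and prints "done." only once the marker file exists. A minimal bash sketch of that polling pattern follows; the variable names and the launch line are illustrative, not copied from the real helper:

    # Launch the stub primary process in the background, then wait for it to signal readiness.
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ]; do    # stub creates this file once its init is complete
        [ -e "/proc/$stubpid" ] || exit 1     # bail out if the stub died before becoming ready
        sleep 1s
    done
    echo done.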
00:08:57.504 10:36:28 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:57.504 10:36:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:08:57.504 10:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.504 10:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.504 ************************************ 00:08:57.504 START TEST nvme_reset 00:08:57.504 ************************************ 00:08:57.504 10:36:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:57.764 Initializing NVMe Controllers 00:08:57.764 Skipping QEMU NVMe SSD at 0000:00:06.0 00:08:57.764 Skipping QEMU NVMe SSD at 0000:00:07.0 00:08:57.764 Skipping QEMU NVMe SSD at 0000:00:09.0 00:08:57.764 Skipping QEMU NVMe SSD at 0000:00:08.0 00:08:57.764 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:57.764 00:08:57.764 real 0m0.207s 00:08:57.764 user 0m0.064s 00:08:57.764 sys 0m0.092s 00:08:57.764 10:36:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.764 10:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.764 ************************************ 00:08:57.764 END TEST nvme_reset 00:08:57.764 ************************************ 00:08:57.764 10:36:28 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:57.764 10:36:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.764 10:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.764 10:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.764 ************************************ 00:08:57.764 START TEST nvme_identify 00:08:57.764 ************************************ 00:08:57.764 10:36:28 -- common/autotest_common.sh@1114 -- # nvme_identify 00:08:57.764 10:36:28 -- nvme/nvme.sh@12 -- # bdfs=() 00:08:57.764 10:36:28 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:57.764 10:36:28 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:57.764 10:36:28 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:57.764 10:36:28 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:57.764 10:36:28 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:57.764 10:36:28 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:57.764 10:36:28 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:57.764 10:36:28 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:58.025 10:36:28 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:08:58.025 10:36:28 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:08:58.025 10:36:28 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:58.025 ===================================================== 00:08:58.025 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:08:58.025 ===================================================== 00:08:58.025 Controller Capabilities/Features 00:08:58.025 ================================ 00:08:58.025 Vendor ID: 1b36 00:08:58.025 Subsystem Vendor ID: 1af4 00:08:58.025 Serial Number: 12340 00:08:58.025 Model Number: QEMU NVMe Ctrl 00:08:58.025 Firmware Version: 8.0.0 00:08:58.025 Recommended Arb Burst: 6 00:08:58.025 IEEE OUI Identifier: 00 54 52 00:08:58.025 Multi-path I/O 00:08:58.025 May have multiple subsystem ports: No 00:08:58.025 May have 
multiple controllers: No 00:08:58.025 Associated with SR-IOV VF: No 00:08:58.025 Max Data Transfer Size: 524288 00:08:58.025 Max Number of Namespaces: 256 00:08:58.025 Max Number of I/O Queues: 64 00:08:58.025 NVMe Specification Version (VS): 1.4 00:08:58.025 NVMe Specification Version (Identify): 1.4 00:08:58.025 Maximum Queue Entries: 2048 00:08:58.025 Contiguous Queues Required: Yes 00:08:58.025 Arbitration Mechanisms Supported 00:08:58.025 Weighted Round Robin: Not Supported 00:08:58.025 Vendor Specific: Not Supported 00:08:58.025 Reset Timeout: 7500 ms 00:08:58.025 Doorbell Stride: 4 bytes 00:08:58.025 NVM Subsystem Reset: Not Supported 00:08:58.025 Command Sets Supported 00:08:58.025 NVM Command Set: Supported 00:08:58.025 Boot Partition: Not Supported 00:08:58.025 Memory Page Size Minimum: 4096 bytes 00:08:58.025 Memory Page Size Maximum: 65536 bytes 00:08:58.025 Persistent Memory Region: Not Supported 00:08:58.025 Optional Asynchronous Events Supported 00:08:58.025 Namespace Attribute Notices: Supported 00:08:58.026 Firmware Activation Notices: Not Supported 00:08:58.026 ANA Change Notices: Not Supported 00:08:58.026 PLE Aggregate Log Change Notices: Not Supported 00:08:58.026 LBA Status Info Alert Notices: Not Supported 00:08:58.026 EGE Aggregate Log Change Notices: Not Supported 00:08:58.026 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.026 Zone Descriptor Change Notices: Not Supported 00:08:58.026 Discovery Log Change Notices: Not Supported 00:08:58.026 Controller Attributes 00:08:58.026 128-bit Host Identifier: Not Supported 00:08:58.026 Non-Operational Permissive Mode: Not Supported 00:08:58.026 NVM Sets: Not Supported 00:08:58.026 Read Recovery Levels: Not Supported 00:08:58.026 Endurance Groups: Not Supported 00:08:58.026 Predictable Latency Mode: Not Supported 00:08:58.026 Traffic Based Keep ALive: Not Supported 00:08:58.026 Namespace Granularity: Not Supported 00:08:58.026 SQ Associations: Not Supported 00:08:58.026 UUID List: Not Supported 00:08:58.026 Multi-Domain Subsystem: Not Supported 00:08:58.026 Fixed Capacity Management: Not Supported 00:08:58.026 Variable Capacity Management: Not Supported 00:08:58.026 Delete Endurance Group: Not Supported 00:08:58.026 Delete NVM Set: Not Supported 00:08:58.026 Extended LBA Formats Supported: Supported 00:08:58.026 Flexible Data Placement Supported: Not Supported 00:08:58.026 00:08:58.026 Controller Memory Buffer Support 00:08:58.026 ================================ 00:08:58.026 Supported: No 00:08:58.026 00:08:58.026 Persistent Memory Region Support 00:08:58.026 ================================ 00:08:58.026 Supported: No 00:08:58.026 00:08:58.026 Admin Command Set Attributes 00:08:58.026 ============================ 00:08:58.026 Security Send/Receive: Not Supported 00:08:58.026 Format NVM: Supported 00:08:58.026 Firmware Activate/Download: Not Supported 00:08:58.026 Namespace Management: Supported 00:08:58.026 Device Self-Test: Not Supported 00:08:58.026 Directives: Supported 00:08:58.026 NVMe-MI: Not Supported 00:08:58.026 Virtualization Management: Not Supported 00:08:58.026 Doorbell Buffer Config: Supported 00:08:58.026 Get LBA Status Capability: Not Supported 00:08:58.026 Command & Feature Lockdown Capability: Not Supported 00:08:58.026 Abort Command Limit: 4 00:08:58.026 Async Event Request Limit: 4 00:08:58.026 Number of Firmware Slots: N/A 00:08:58.026 Firmware Slot 1 Read-Only: N/A 00:08:58.026 Firmware Activation Without Reset: N/A 00:08:58.026 Multiple Update Detection Support: N/A 00:08:58.026 Firmware 
Update Granularity: No Information Provided 00:08:58.026 Per-Namespace SMART Log: Yes 00:08:58.026 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.026 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:58.026 Command Effects Log Page: Supported 00:08:58.026 Get Log Page Extended Data: Supported 00:08:58.026 Telemetry Log Pages: Not Supported 00:08:58.026 Persistent Event Log Pages: Not Supported 00:08:58.026 Supported Log Pages Log Page: May Support 00:08:58.026 Commands Supported & Effects Log Page: Not Supported 00:08:58.026 Feature Identifiers & Effects Log Page:May Support 00:08:58.026 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.026 Data Area 4 for Telemetry Log: Not Supported 00:08:58.026 Error Log Page Entries Supported: 1 00:08:58.026 Keep Alive: Not Supported 00:08:58.026 00:08:58.026 NVM Command Set Attributes 00:08:58.026 ========================== 00:08:58.026 Submission Queue Entry Size 00:08:58.026 Max: 64 00:08:58.026 Min: 64 00:08:58.026 Completion Queue Entry Size 00:08:58.026 Max: 16 00:08:58.026 Min: 16 00:08:58.026 Number of Namespaces: 256 00:08:58.026 Compare Command: Supported 00:08:58.026 Write Uncorrectable Command: Not Supported 00:08:58.026 Dataset Management Command: Supported 00:08:58.026 Write Zeroes Command: Supported 00:08:58.026 Set Features Save Field: Supported 00:08:58.026 Reservations: Not Supported 00:08:58.026 Timestamp: Supported 00:08:58.026 Copy: Supported 00:08:58.026 Volatile Write Cache: Present 00:08:58.026 Atomic Write Unit (Normal): 1 00:08:58.026 Atomic Write Unit (PFail): 1 00:08:58.026 Atomic Compare & Write Unit: 1 00:08:58.026 Fused Compare & Write: Not Supported 00:08:58.026 Scatter-Gather List 00:08:58.026 SGL Command Set: Supported 00:08:58.026 SGL Keyed: Not Supported 00:08:58.026 SGL Bit Bucket Descriptor: Not Supported 00:08:58.026 SGL Metadata Pointer: Not Supported 00:08:58.026 Oversized SGL: Not Supported 00:08:58.026 SGL Metadata Address: Not Supported 00:08:58.026 SGL Offset: Not Supported 00:08:58.026 Transport SGL Data Block: Not Supported 00:08:58.026 Replay Protected Memory Block: Not Supported 00:08:58.026 00:08:58.026 Firmware Slot Information 00:08:58.026 ========================= 00:08:58.026 Active slot: 1 00:08:58.026 Slot 1 Firmware Revision: 1.0 00:08:58.026 00:08:58.026 00:08:58.026 Commands Supported and Effects 00:08:58.026 ============================== 00:08:58.026 Admin Commands 00:08:58.026 -------------- 00:08:58.026 Delete I/O Submission Queue (00h): Supported 00:08:58.026 Create I/O Submission Queue (01h): Supported 00:08:58.026 Get Log Page (02h): Supported 00:08:58.026 Delete I/O Completion Queue (04h): Supported 00:08:58.026 Create I/O Completion Queue (05h): Supported 00:08:58.026 Identify (06h): Supported 00:08:58.026 Abort (08h): Supported 00:08:58.026 Set Features (09h): Supported 00:08:58.026 Get Features (0Ah): Supported 00:08:58.026 Asynchronous Event Request (0Ch): Supported 00:08:58.026 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.026 Directive Send (19h): Supported 00:08:58.026 Directive Receive (1Ah): Supported 00:08:58.026 Virtualization Management (1Ch): Supported 00:08:58.026 Doorbell Buffer Config (7Ch): Supported 00:08:58.026 Format NVM (80h): Supported LBA-Change 00:08:58.026 I/O Commands 00:08:58.026 ------------ 00:08:58.026 Flush (00h): Supported LBA-Change 00:08:58.026 Write (01h): Supported LBA-Change 00:08:58.026 Read (02h): Supported 00:08:58.026 Compare (05h): Supported 00:08:58.026 Write Zeroes (08h): Supported LBA-Change 
00:08:58.026 Dataset Management (09h): Supported LBA-Change 00:08:58.026 Unknown (0Ch): Supported 00:08:58.026 Unknown (12h): Supported 00:08:58.026 Copy (19h): Supported LBA-Change 00:08:58.026 Unknown (1Dh): Supported LBA-Change 00:08:58.026 00:08:58.026 Error Log 00:08:58.026 ========= 00:08:58.026 00:08:58.026 Arbitration 00:08:58.026 =========== 00:08:58.026 Arbitration Burst: no limit 00:08:58.026 00:08:58.026 Power Management 00:08:58.026 ================ 00:08:58.026 Number of Power States: 1 00:08:58.026 Current Power State: Power State #0 00:08:58.026 Power State #0: 00:08:58.026 Max Power: 25.00 W 00:08:58.026 Non-Operational State: Operational 00:08:58.026 Entry Latency: 16 microseconds 00:08:58.026 Exit Latency: 4 microseconds 00:08:58.026 Relative Read Throughput: 0 00:08:58.026 Relative Read Latency: 0 00:08:58.026 Relative Write Throughput: 0 00:08:58.026 Relative Write Latency: 0 [2024-12-03 10:36:28.584941] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63583 terminated unexpected [2024-12-03 10:36:28.585804] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63583 terminated unexpected 00:08:58.026 Idle Power: Not Reported 00:08:58.026 Active Power: Not Reported 00:08:58.026 Non-Operational Permissive Mode: Not Supported 00:08:58.026 00:08:58.026 Health Information 00:08:58.026 ================== 00:08:58.026 Critical Warnings: 00:08:58.026 Available Spare Space: OK 00:08:58.026 Temperature: OK 00:08:58.026 Device Reliability: OK 00:08:58.026 Read Only: No 00:08:58.026 Volatile Memory Backup: OK 00:08:58.026 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.026 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.026 Available Spare: 0% 00:08:58.026 Available Spare Threshold: 0% 00:08:58.026 Life Percentage Used: 0% 00:08:58.026 Data Units Read: 1808 00:08:58.026 Data Units Written: 835 00:08:58.026 Host Read Commands: 86939 00:08:58.026 Host Write Commands: 43173 00:08:58.026 Controller Busy Time: 0 minutes 00:08:58.026 Power Cycles: 0 00:08:58.026 Power On Hours: 0 hours 00:08:58.026 Unsafe Shutdowns: 0 00:08:58.026 Unrecoverable Media Errors: 0 00:08:58.026 Lifetime Error Log Entries: 0 00:08:58.026 Warning Temperature Time: 0 minutes 00:08:58.026 Critical Temperature Time: 0 minutes 00:08:58.026 00:08:58.026 Number of Queues 00:08:58.026 ================ 00:08:58.026 Number of I/O Submission Queues: 64 00:08:58.026 Number of I/O Completion Queues: 64 00:08:58.026 00:08:58.026 ZNS Specific Controller Data 00:08:58.026 ============================ 00:08:58.026 Zone Append Size Limit: 0 00:08:58.026 00:08:58.026 00:08:58.026 Active Namespaces 00:08:58.026 ================= 00:08:58.027 Namespace ID:1 00:08:58.027 Error Recovery Timeout: Unlimited 00:08:58.027 Command Set Identifier: NVM (00h) 00:08:58.027 Deallocate: Supported 00:08:58.027 Deallocated/Unwritten Error: Supported 00:08:58.027 Deallocated Read Value: All 0x00 00:08:58.027 Deallocate in Write Zeroes: Not Supported 00:08:58.027 Deallocated Guard Field: 0xFFFF 00:08:58.027 Flush: Supported 00:08:58.027 Reservation: Not Supported 00:08:58.027 Metadata Transferred as: Separate Metadata Buffer 00:08:58.027 Namespace Sharing Capabilities: Private 00:08:58.027 Size (in LBAs): 1548666 (5GiB) 00:08:58.027 Capacity (in LBAs): 1548666 (5GiB) 00:08:58.027 Utilization (in LBAs): 1548666 (5GiB) 00:08:58.027 Thin Provisioning: Not Supported 00:08:58.027 Per-NS Atomic Units: No 00:08:58.027 Maximum Single Source Range
Length: 128 00:08:58.027 Maximum Copy Length: 128 00:08:58.027 Maximum Source Range Count: 128 00:08:58.027 NGUID/EUI64 Never Reused: No 00:08:58.027 Namespace Write Protected: No 00:08:58.027 Number of LBA Formats: 8 00:08:58.027 Current LBA Format: LBA Format #07 00:08:58.027 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.027 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.027 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.027 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.027 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.027 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.027 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.027 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.027 00:08:58.027 ===================================================== 00:08:58.027 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:08:58.027 ===================================================== 00:08:58.027 Controller Capabilities/Features 00:08:58.027 ================================ 00:08:58.027 Vendor ID: 1b36 00:08:58.027 Subsystem Vendor ID: 1af4 00:08:58.027 Serial Number: 12341 00:08:58.027 Model Number: QEMU NVMe Ctrl 00:08:58.027 Firmware Version: 8.0.0 00:08:58.027 Recommended Arb Burst: 6 00:08:58.027 IEEE OUI Identifier: 00 54 52 00:08:58.027 Multi-path I/O 00:08:58.027 May have multiple subsystem ports: No 00:08:58.027 May have multiple controllers: No 00:08:58.027 Associated with SR-IOV VF: No 00:08:58.027 Max Data Transfer Size: 524288 00:08:58.027 Max Number of Namespaces: 256 00:08:58.027 Max Number of I/O Queues: 64 00:08:58.027 NVMe Specification Version (VS): 1.4 00:08:58.027 NVMe Specification Version (Identify): 1.4 00:08:58.027 Maximum Queue Entries: 2048 00:08:58.027 Contiguous Queues Required: Yes 00:08:58.027 Arbitration Mechanisms Supported 00:08:58.027 Weighted Round Robin: Not Supported 00:08:58.027 Vendor Specific: Not Supported 00:08:58.027 Reset Timeout: 7500 ms 00:08:58.027 Doorbell Stride: 4 bytes 00:08:58.027 NVM Subsystem Reset: Not Supported 00:08:58.027 Command Sets Supported 00:08:58.027 NVM Command Set: Supported 00:08:58.027 Boot Partition: Not Supported 00:08:58.027 Memory Page Size Minimum: 4096 bytes 00:08:58.027 Memory Page Size Maximum: 65536 bytes 00:08:58.027 Persistent Memory Region: Not Supported 00:08:58.027 Optional Asynchronous Events Supported 00:08:58.027 Namespace Attribute Notices: Supported 00:08:58.027 Firmware Activation Notices: Not Supported 00:08:58.027 ANA Change Notices: Not Supported 00:08:58.027 PLE Aggregate Log Change Notices: Not Supported 00:08:58.027 LBA Status Info Alert Notices: Not Supported 00:08:58.027 EGE Aggregate Log Change Notices: Not Supported 00:08:58.027 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.027 Zone Descriptor Change Notices: Not Supported 00:08:58.027 Discovery Log Change Notices: Not Supported 00:08:58.027 Controller Attributes 00:08:58.027 128-bit Host Identifier: Not Supported 00:08:58.027 Non-Operational Permissive Mode: Not Supported 00:08:58.027 NVM Sets: Not Supported 00:08:58.027 Read Recovery Levels: Not Supported 00:08:58.027 Endurance Groups: Not Supported 00:08:58.027 Predictable Latency Mode: Not Supported 00:08:58.027 Traffic Based Keep ALive: Not Supported 00:08:58.027 Namespace Granularity: Not Supported 00:08:58.027 SQ Associations: Not Supported 00:08:58.027 UUID List: Not Supported 00:08:58.027 Multi-Domain Subsystem: Not Supported 00:08:58.027 Fixed Capacity Management: Not Supported 00:08:58.027 Variable 
Capacity Management: Not Supported 00:08:58.027 Delete Endurance Group: Not Supported 00:08:58.027 Delete NVM Set: Not Supported 00:08:58.027 Extended LBA Formats Supported: Supported 00:08:58.027 Flexible Data Placement Supported: Not Supported 00:08:58.027 00:08:58.027 Controller Memory Buffer Support 00:08:58.027 ================================ 00:08:58.027 Supported: No 00:08:58.027 00:08:58.027 Persistent Memory Region Support 00:08:58.027 ================================ 00:08:58.027 Supported: No 00:08:58.027 00:08:58.027 Admin Command Set Attributes 00:08:58.027 ============================ 00:08:58.027 Security Send/Receive: Not Supported 00:08:58.027 Format NVM: Supported 00:08:58.027 Firmware Activate/Download: Not Supported 00:08:58.027 Namespace Management: Supported 00:08:58.027 Device Self-Test: Not Supported 00:08:58.027 Directives: Supported 00:08:58.027 NVMe-MI: Not Supported 00:08:58.027 Virtualization Management: Not Supported 00:08:58.027 Doorbell Buffer Config: Supported 00:08:58.027 Get LBA Status Capability: Not Supported 00:08:58.027 Command & Feature Lockdown Capability: Not Supported 00:08:58.027 Abort Command Limit: 4 00:08:58.027 Async Event Request Limit: 4 00:08:58.027 Number of Firmware Slots: N/A 00:08:58.027 Firmware Slot 1 Read-Only: N/A 00:08:58.027 Firmware Activation Without Reset: N/A 00:08:58.027 Multiple Update Detection Support: N/A 00:08:58.027 Firmware Update Granularity: No Information Provided 00:08:58.027 Per-Namespace SMART Log: Yes 00:08:58.027 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.027 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:58.027 Command Effects Log Page: Supported 00:08:58.027 Get Log Page Extended Data: Supported 00:08:58.027 Telemetry Log Pages: Not Supported 00:08:58.027 Persistent Event Log Pages: Not Supported 00:08:58.027 Supported Log Pages Log Page: May Support 00:08:58.027 Commands Supported & Effects Log Page: Not Supported 00:08:58.027 Feature Identifiers & Effects Log Page:May Support 00:08:58.027 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.027 Data Area 4 for Telemetry Log: Not Supported 00:08:58.027 Error Log Page Entries Supported: 1 00:08:58.027 Keep Alive: Not Supported 00:08:58.027 00:08:58.027 NVM Command Set Attributes 00:08:58.027 ========================== 00:08:58.027 Submission Queue Entry Size 00:08:58.027 Max: 64 00:08:58.027 Min: 64 00:08:58.027 Completion Queue Entry Size 00:08:58.027 Max: 16 00:08:58.027 Min: 16 00:08:58.027 Number of Namespaces: 256 00:08:58.027 Compare Command: Supported 00:08:58.027 Write Uncorrectable Command: Not Supported 00:08:58.027 Dataset Management Command: Supported 00:08:58.027 Write Zeroes Command: Supported 00:08:58.027 Set Features Save Field: Supported 00:08:58.027 Reservations: Not Supported 00:08:58.027 Timestamp: Supported 00:08:58.027 Copy: Supported 00:08:58.027 Volatile Write Cache: Present 00:08:58.027 Atomic Write Unit (Normal): 1 00:08:58.027 Atomic Write Unit (PFail): 1 00:08:58.027 Atomic Compare & Write Unit: 1 00:08:58.027 Fused Compare & Write: Not Supported 00:08:58.027 Scatter-Gather List 00:08:58.027 SGL Command Set: Supported 00:08:58.027 SGL Keyed: Not Supported 00:08:58.027 SGL Bit Bucket Descriptor: Not Supported 00:08:58.027 SGL Metadata Pointer: Not Supported 00:08:58.027 Oversized SGL: Not Supported 00:08:58.027 SGL Metadata Address: Not Supported 00:08:58.027 SGL Offset: Not Supported 00:08:58.027 Transport SGL Data Block: Not Supported 00:08:58.027 Replay Protected Memory Block: Not Supported 00:08:58.027 
00:08:58.027 Firmware Slot Information 00:08:58.027 ========================= 00:08:58.027 Active slot: 1 00:08:58.027 Slot 1 Firmware Revision: 1.0 00:08:58.027 00:08:58.027 00:08:58.027 Commands Supported and Effects 00:08:58.027 ============================== 00:08:58.027 Admin Commands 00:08:58.027 -------------- 00:08:58.027 Delete I/O Submission Queue (00h): Supported 00:08:58.027 Create I/O Submission Queue (01h): Supported 00:08:58.027 Get Log Page (02h): Supported 00:08:58.027 Delete I/O Completion Queue (04h): Supported 00:08:58.027 Create I/O Completion Queue (05h): Supported 00:08:58.027 Identify (06h): Supported 00:08:58.027 Abort (08h): Supported 00:08:58.027 Set Features (09h): Supported 00:08:58.027 Get Features (0Ah): Supported 00:08:58.027 Asynchronous Event Request (0Ch): Supported 00:08:58.027 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.027 Directive Send (19h): Supported 00:08:58.027 Directive Receive (1Ah): Supported 00:08:58.027 Virtualization Management (1Ch): Supported 00:08:58.027 Doorbell Buffer Config (7Ch): Supported 00:08:58.028 Format NVM (80h): Supported LBA-Change 00:08:58.028 I/O Commands 00:08:58.028 ------------ 00:08:58.028 Flush (00h): Supported LBA-Change 00:08:58.028 Write (01h): Supported LBA-Change 00:08:58.028 Read (02h): Supported 00:08:58.028 Compare (05h): Supported 00:08:58.028 Write Zeroes (08h): Supported LBA-Change 00:08:58.028 Dataset Management (09h): Supported LBA-Change 00:08:58.028 Unknown (0Ch): Supported 00:08:58.028 Unknown (12h): Supported 00:08:58.028 Copy (19h): Supported LBA-Change 00:08:58.028 Unknown (1Dh): Supported LBA-Change 00:08:58.028 00:08:58.028 Error Log 00:08:58.028 ========= 00:08:58.028 00:08:58.028 Arbitration 00:08:58.028 =========== 00:08:58.028 Arbitration Burst: no limit 00:08:58.028 00:08:58.028 Power Management 00:08:58.028 ================ 00:08:58.028 Number of Power States: 1 00:08:58.028 Current Power State: Power State #0 00:08:58.028 Power State #0: 00:08:58.028 Max Power: 25.00 W 00:08:58.028 Non-Operational State: Operational 00:08:58.028 Entry Latency: 16 microseconds 00:08:58.028 Exit Latency: 4 microseconds 00:08:58.028 Relative Read Throughput: 0 00:08:58.028 Relative Read Latency: 0 00:08:58.028 Relative Write Throughput: 0 00:08:58.028 Relative Write Latency: 0 00:08:58.028 Idle Power: Not Reported 00:08:58.028 Active Power: Not Reported 00:08:58.028 Non-Operational Permissive Mode: Not Supported 00:08:58.028 00:08:58.028 Health Information 00:08:58.028 ================== 00:08:58.028 Critical Warnings: 00:08:58.028 Available Spare Space: OK 00:08:58.028 Temperature: OK 00:08:58.028 Device Reliability: OK 00:08:58.028 Read Only: No 00:08:58.028 Volatile Memory Backup: OK 00:08:58.028 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.028 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.028 Available Spare: 0% 00:08:58.028 Available Spare Threshold: 0% 00:08:58.028 Life Percentage Used: 0% 00:08:58.028 Data Units Read: 1219 00:08:58.028 Data Units Written: 566 00:08:58.028 Host Read Commands: 59720 00:08:58.028 Host Write Commands: 29375 00:08:58.028 Controller Busy Time: 0 minutes 00:08:58.028 Power Cycles: 0 00:08:58.028 Power On Hours: 0 hours 00:08:58.028 Unsafe Shutdowns: 0 00:08:58.028 Unrecoverable Media Errors: 0 00:08:58.028 Lifetime Error Log Entries: 0 00:08:58.028 Warning Temperature Time: 0 minutes 00:08:58.028 Critical Temperature Time: 0 minutes 00:08:58.028 00:08:58.028 Number of Queues 00:08:58.028 ================ 00:08:58.028 Number of I/O 
Submission Queues: 64 00:08:58.028 Number of I/O Completion Queues: 64 00:08:58.028 00:08:58.028 ZNS Specific Controller Data 00:08:58.028 ============================ 00:08:58.028 Zone Append Size Limit: 0 00:08:58.028 00:08:58.028 00:08:58.028 Active Namespaces 00:08:58.028 ================= 00:08:58.028 Namespace ID:1 00:08:58.028 Error Recovery Timeout: Unlimited [2024-12-03 10:36:28.587287] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63583 terminated unexpected 00:08:58.028 Command Set Identifier: NVM (00h) 00:08:58.028 Deallocate: Supported 00:08:58.028 Deallocated/Unwritten Error: Supported 00:08:58.028 Deallocated Read Value: All 0x00 00:08:58.028 Deallocate in Write Zeroes: Not Supported 00:08:58.028 Deallocated Guard Field: 0xFFFF 00:08:58.028 Flush: Supported 00:08:58.028 Reservation: Not Supported 00:08:58.028 Namespace Sharing Capabilities: Private 00:08:58.028 Size (in LBAs): 1310720 (5GiB) 00:08:58.028 Capacity (in LBAs): 1310720 (5GiB) 00:08:58.028 Utilization (in LBAs): 1310720 (5GiB) 00:08:58.028 Thin Provisioning: Not Supported 00:08:58.028 Per-NS Atomic Units: No 00:08:58.028 Maximum Single Source Range Length: 128 00:08:58.028 Maximum Copy Length: 128 00:08:58.028 Maximum Source Range Count: 128 00:08:58.028 NGUID/EUI64 Never Reused: No 00:08:58.028 Namespace Write Protected: No 00:08:58.028 Number of LBA Formats: 8 00:08:58.028 Current LBA Format: LBA Format #04 00:08:58.028 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.028 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.028 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.028 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.028 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.028 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.028 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.028 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.028 00:08:58.028 ===================================================== 00:08:58.028 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:58.028 ===================================================== 00:08:58.028 Controller Capabilities/Features 00:08:58.028 ================================ 00:08:58.028 Vendor ID: 1b36 00:08:58.028 Subsystem Vendor ID: 1af4 00:08:58.028 Serial Number: 12343 00:08:58.028 Model Number: QEMU NVMe Ctrl 00:08:58.028 Firmware Version: 8.0.0 00:08:58.028 Recommended Arb Burst: 6 00:08:58.028 IEEE OUI Identifier: 00 54 52 00:08:58.028 Multi-path I/O 00:08:58.028 May have multiple subsystem ports: No 00:08:58.028 May have multiple controllers: Yes 00:08:58.028 Associated with SR-IOV VF: No 00:08:58.028 Max Data Transfer Size: 524288 00:08:58.028 Max Number of Namespaces: 256 00:08:58.028 Max Number of I/O Queues: 64 00:08:58.028 NVMe Specification Version (VS): 1.4 00:08:58.028 NVMe Specification Version (Identify): 1.4 00:08:58.028 Maximum Queue Entries: 2048 00:08:58.028 Contiguous Queues Required: Yes 00:08:58.028 Arbitration Mechanisms Supported 00:08:58.028 Weighted Round Robin: Not Supported 00:08:58.028 Vendor Specific: Not Supported 00:08:58.028 Reset Timeout: 7500 ms 00:08:58.028 Doorbell Stride: 4 bytes 00:08:58.028 NVM Subsystem Reset: Not Supported 00:08:58.028 Command Sets Supported 00:08:58.028 NVM Command Set: Supported 00:08:58.028 Boot Partition: Not Supported 00:08:58.028 Memory Page Size Minimum: 4096 bytes 00:08:58.028 Memory Page Size Maximum: 65536 bytes 00:08:58.028 Persistent Memory Region: Not Supported 00:08:58.028
Optional Asynchronous Events Supported 00:08:58.028 Namespace Attribute Notices: Supported 00:08:58.028 Firmware Activation Notices: Not Supported 00:08:58.028 ANA Change Notices: Not Supported 00:08:58.028 PLE Aggregate Log Change Notices: Not Supported 00:08:58.028 LBA Status Info Alert Notices: Not Supported 00:08:58.028 EGE Aggregate Log Change Notices: Not Supported 00:08:58.028 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.028 Zone Descriptor Change Notices: Not Supported 00:08:58.028 Discovery Log Change Notices: Not Supported 00:08:58.028 Controller Attributes 00:08:58.028 128-bit Host Identifier: Not Supported 00:08:58.028 Non-Operational Permissive Mode: Not Supported 00:08:58.028 NVM Sets: Not Supported 00:08:58.028 Read Recovery Levels: Not Supported 00:08:58.028 Endurance Groups: Supported 00:08:58.028 Predictable Latency Mode: Not Supported 00:08:58.028 Traffic Based Keep ALive: Not Supported 00:08:58.028 Namespace Granularity: Not Supported 00:08:58.028 SQ Associations: Not Supported 00:08:58.028 UUID List: Not Supported 00:08:58.028 Multi-Domain Subsystem: Not Supported 00:08:58.028 Fixed Capacity Management: Not Supported 00:08:58.028 Variable Capacity Management: Not Supported 00:08:58.028 Delete Endurance Group: Not Supported 00:08:58.028 Delete NVM Set: Not Supported 00:08:58.028 Extended LBA Formats Supported: Supported 00:08:58.028 Flexible Data Placement Supported: Supported 00:08:58.028 00:08:58.028 Controller Memory Buffer Support 00:08:58.028 ================================ 00:08:58.028 Supported: No 00:08:58.028 00:08:58.028 Persistent Memory Region Support 00:08:58.028 ================================ 00:08:58.028 Supported: No 00:08:58.028 00:08:58.028 Admin Command Set Attributes 00:08:58.028 ============================ 00:08:58.028 Security Send/Receive: Not Supported 00:08:58.028 Format NVM: Supported 00:08:58.028 Firmware Activate/Download: Not Supported 00:08:58.028 Namespace Management: Supported 00:08:58.028 Device Self-Test: Not Supported 00:08:58.028 Directives: Supported 00:08:58.028 NVMe-MI: Not Supported 00:08:58.028 Virtualization Management: Not Supported 00:08:58.028 Doorbell Buffer Config: Supported 00:08:58.028 Get LBA Status Capability: Not Supported 00:08:58.028 Command & Feature Lockdown Capability: Not Supported 00:08:58.028 Abort Command Limit: 4 00:08:58.028 Async Event Request Limit: 4 00:08:58.028 Number of Firmware Slots: N/A 00:08:58.028 Firmware Slot 1 Read-Only: N/A 00:08:58.028 Firmware Activation Without Reset: N/A 00:08:58.028 Multiple Update Detection Support: N/A 00:08:58.028 Firmware Update Granularity: No Information Provided 00:08:58.028 Per-Namespace SMART Log: Yes 00:08:58.029 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.029 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:58.029 Command Effects Log Page: Supported 00:08:58.029 Get Log Page Extended Data: Supported 00:08:58.029 Telemetry Log Pages: Not Supported 00:08:58.029 Persistent Event Log Pages: Not Supported 00:08:58.029 Supported Log Pages Log Page: May Support 00:08:58.029 Commands Supported & Effects Log Page: Not Supported 00:08:58.029 Feature Identifiers & Effects Log Page:May Support 00:08:58.029 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.029 Data Area 4 for Telemetry Log: Not Supported 00:08:58.029 Error Log Page Entries Supported: 1 00:08:58.029 Keep Alive: Not Supported 00:08:58.029 00:08:58.029 NVM Command Set Attributes 00:08:58.029 ========================== 00:08:58.029 Submission Queue Entry Size 
00:08:58.029 Max: 64 00:08:58.029 Min: 64 00:08:58.029 Completion Queue Entry Size 00:08:58.029 Max: 16 00:08:58.029 Min: 16 00:08:58.029 Number of Namespaces: 256 00:08:58.029 Compare Command: Supported 00:08:58.029 Write Uncorrectable Command: Not Supported 00:08:58.029 Dataset Management Command: Supported 00:08:58.029 Write Zeroes Command: Supported 00:08:58.029 Set Features Save Field: Supported 00:08:58.029 Reservations: Not Supported 00:08:58.029 Timestamp: Supported 00:08:58.029 Copy: Supported 00:08:58.029 Volatile Write Cache: Present 00:08:58.029 Atomic Write Unit (Normal): 1 00:08:58.029 Atomic Write Unit (PFail): 1 00:08:58.029 Atomic Compare & Write Unit: 1 00:08:58.029 Fused Compare & Write: Not Supported 00:08:58.029 Scatter-Gather List 00:08:58.029 SGL Command Set: Supported 00:08:58.029 SGL Keyed: Not Supported 00:08:58.029 SGL Bit Bucket Descriptor: Not Supported 00:08:58.029 SGL Metadata Pointer: Not Supported 00:08:58.029 Oversized SGL: Not Supported 00:08:58.029 SGL Metadata Address: Not Supported 00:08:58.029 SGL Offset: Not Supported 00:08:58.029 Transport SGL Data Block: Not Supported 00:08:58.029 Replay Protected Memory Block: Not Supported 00:08:58.029 00:08:58.029 Firmware Slot Information 00:08:58.029 ========================= 00:08:58.029 Active slot: 1 00:08:58.029 Slot 1 Firmware Revision: 1.0 00:08:58.029 00:08:58.029 00:08:58.029 Commands Supported and Effects 00:08:58.029 ============================== 00:08:58.029 Admin Commands 00:08:58.029 -------------- 00:08:58.029 Delete I/O Submission Queue (00h): Supported 00:08:58.029 Create I/O Submission Queue (01h): Supported 00:08:58.029 Get Log Page (02h): Supported 00:08:58.029 Delete I/O Completion Queue (04h): Supported 00:08:58.029 Create I/O Completion Queue (05h): Supported 00:08:58.029 Identify (06h): Supported 00:08:58.029 Abort (08h): Supported 00:08:58.029 Set Features (09h): Supported 00:08:58.029 Get Features (0Ah): Supported 00:08:58.029 Asynchronous Event Request (0Ch): Supported 00:08:58.029 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.029 Directive Send (19h): Supported 00:08:58.029 Directive Receive (1Ah): Supported 00:08:58.029 Virtualization Management (1Ch): Supported 00:08:58.029 Doorbell Buffer Config (7Ch): Supported 00:08:58.029 Format NVM (80h): Supported LBA-Change 00:08:58.029 I/O Commands 00:08:58.029 ------------ 00:08:58.029 Flush (00h): Supported LBA-Change 00:08:58.029 Write (01h): Supported LBA-Change 00:08:58.029 Read (02h): Supported 00:08:58.029 Compare (05h): Supported 00:08:58.029 Write Zeroes (08h): Supported LBA-Change 00:08:58.029 Dataset Management (09h): Supported LBA-Change 00:08:58.029 Unknown (0Ch): Supported 00:08:58.029 Unknown (12h): Supported 00:08:58.029 Copy (19h): Supported LBA-Change 00:08:58.029 Unknown (1Dh): Supported LBA-Change 00:08:58.029 00:08:58.029 Error Log 00:08:58.029 ========= 00:08:58.029 00:08:58.029 Arbitration 00:08:58.029 =========== 00:08:58.029 Arbitration Burst: no limit 00:08:58.029 00:08:58.029 Power Management 00:08:58.029 ================ 00:08:58.029 Number of Power States: 1 00:08:58.029 Current Power State: Power State #0 00:08:58.029 Power State #0: 00:08:58.029 Max Power: 25.00 W 00:08:58.029 Non-Operational State: Operational 00:08:58.029 Entry Latency: 16 microseconds 00:08:58.029 Exit Latency: 4 microseconds 00:08:58.029 Relative Read Throughput: 0 00:08:58.029 Relative Read Latency: 0 00:08:58.029 Relative Write Throughput: 0 00:08:58.029 Relative Write Latency: 0 00:08:58.029 Idle Power: Not 
Reported 00:08:58.029 Active Power: Not Reported 00:08:58.029 Non-Operational Permissive Mode: Not Supported 00:08:58.029 00:08:58.029 Health Information 00:08:58.029 ================== 00:08:58.029 Critical Warnings: 00:08:58.029 Available Spare Space: OK 00:08:58.029 Temperature: OK 00:08:58.029 Device Reliability: OK 00:08:58.029 Read Only: No 00:08:58.029 Volatile Memory Backup: OK 00:08:58.029 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.029 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.029 Available Spare: 0% 00:08:58.029 Available Spare Threshold: 0% 00:08:58.029 Life Percentage Used: 0% 00:08:58.029 Data Units Read: 1327 00:08:58.029 Data Units Written: 612 00:08:58.029 Host Read Commands: 60639 00:08:58.029 Host Write Commands: 29776 00:08:58.029 Controller Busy Time: 0 minutes 00:08:58.029 Power Cycles: 0 00:08:58.029 Power On Hours: 0 hours 00:08:58.029 Unsafe Shutdowns: 0 00:08:58.029 Unrecoverable Media Errors: 0 00:08:58.029 Lifetime Error Log Entries: 0 00:08:58.029 Warning Temperature Time: 0 minutes 00:08:58.029 Critical Temperature Time: 0 minutes 00:08:58.029 00:08:58.029 Number of Queues 00:08:58.029 ================ 00:08:58.029 Number of I/O Submission Queues: 64 00:08:58.029 Number of I/O Completion Queues: 64 00:08:58.029 00:08:58.029 ZNS Specific Controller Data 00:08:58.029 ============================ 00:08:58.029 Zone Append Size Limit: 0 00:08:58.029 00:08:58.029 00:08:58.029 Active Namespaces 00:08:58.029 ================= 00:08:58.029 Namespace ID:1 00:08:58.029 Error Recovery Timeout: Unlimited 00:08:58.029 Command Set Identifier: NVM (00h) 00:08:58.029 Deallocate: Supported 00:08:58.029 Deallocated/Unwritten Error: Supported 00:08:58.029 Deallocated Read Value: All 0x00 00:08:58.029 Deallocate in Write Zeroes: Not Supported 00:08:58.029 Deallocated Guard Field: 0xFFFF 00:08:58.029 Flush: Supported 00:08:58.029 Reservation: Not Supported 00:08:58.029 Namespace Sharing Capabilities: Multiple Controllers 00:08:58.029 Size (in LBAs): 262144 (1GiB) 00:08:58.029 Capacity (in LBAs): 262144 (1GiB) 00:08:58.029 Utilization (in LBAs): 262144 (1GiB) 00:08:58.029 Thin Provisioning: Not Supported 00:08:58.029 Per-NS Atomic Units: No 00:08:58.029 Maximum Single Source Range Length: 128 00:08:58.029 Maximum Copy Length: 128 00:08:58.029 Maximum Source Range Count: 128 00:08:58.029 NGUID/EUI64 Never Reused: No 00:08:58.029 Namespace Write Protected: No 00:08:58.029 Endurance group ID: 1 00:08:58.029 Number of LBA Formats: 8 00:08:58.029 Current LBA Format: LBA Format #04 00:08:58.029 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.029 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.029 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.029 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.029 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.029 LBA Format #05: Data Size: 4096 Metadata Size: 8 [2024-12-03 10:36:28.588772] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63583 terminated unexpected 00:08:58.029 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.029 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.029 00:08:58.029 Get Feature FDP: 00:08:58.029 ================ 00:08:58.029 Enabled: Yes 00:08:58.029 FDP configuration index: 0 00:08:58.029 00:08:58.029 FDP configurations log page 00:08:58.029 =========================== 00:08:58.029 Number of FDP configurations: 1 00:08:58.029 Version: 0 00:08:58.029 Size: 112 00:08:58.029 FDP
Configuration Descriptor: 0 00:08:58.029 Descriptor Size: 96 00:08:58.029 Reclaim Group Identifier format: 2 00:08:58.029 FDP Volatile Write Cache: Not Present 00:08:58.029 FDP Configuration: Valid 00:08:58.029 Vendor Specific Size: 0 00:08:58.029 Number of Reclaim Groups: 2 00:08:58.029 Number of Reclaim Unit Handles: 8 00:08:58.029 Max Placement Identifiers: 128 00:08:58.029 Number of Namespaces Supported: 256 00:08:58.029 Reclaim Unit Nominal Size: 6000000 bytes 00:08:58.029 Estimated Reclaim Unit Time Limit: Not Reported 00:08:58.029 RUH Desc #000: RUH Type: Initially Isolated 00:08:58.029 RUH Desc #001: RUH Type: Initially Isolated 00:08:58.029 RUH Desc #002: RUH Type: Initially Isolated 00:08:58.029 RUH Desc #003: RUH Type: Initially Isolated 00:08:58.029 RUH Desc #004: RUH Type: Initially Isolated 00:08:58.029 RUH Desc #005: RUH Type: Initially Isolated 00:08:58.029 RUH Desc #006: RUH Type: Initially Isolated 00:08:58.030 RUH Desc #007: RUH Type: Initially Isolated 00:08:58.030 00:08:58.030 FDP reclaim unit handle usage log page 00:08:58.030 ====================================== 00:08:58.030 Number of Reclaim Unit Handles: 8 00:08:58.030 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:58.030 RUH Usage Desc #001: RUH Attributes: Unused 00:08:58.030 RUH Usage Desc #002: RUH Attributes: Unused 00:08:58.030 RUH Usage Desc #003: RUH Attributes: Unused 00:08:58.030 RUH Usage Desc #004: RUH Attributes: Unused 00:08:58.030 RUH Usage Desc #005: RUH Attributes: Unused 00:08:58.030 RUH Usage Desc #006: RUH Attributes: Unused 00:08:58.030 RUH Usage Desc #007: RUH Attributes: Unused 00:08:58.030 00:08:58.030 FDP statistics log page 00:08:58.030 ======================= 00:08:58.030 Host bytes with metadata written: 402612224 00:08:58.030 Media bytes with metadata written: 402698240 00:08:58.030 Media bytes erased: 0 00:08:58.030 00:08:58.030 FDP events log page 00:08:58.030 =================== 00:08:58.030 Number of FDP events: 0 00:08:58.030 00:08:58.030 ===================================================== 00:08:58.030 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:58.030 ===================================================== 00:08:58.030 Controller Capabilities/Features 00:08:58.030 ================================ 00:08:58.030 Vendor ID: 1b36 00:08:58.030 Subsystem Vendor ID: 1af4 00:08:58.030 Serial Number: 12342 00:08:58.030 Model Number: QEMU NVMe Ctrl 00:08:58.030 Firmware Version: 8.0.0 00:08:58.030 Recommended Arb Burst: 6 00:08:58.030 IEEE OUI Identifier: 00 54 52 00:08:58.030 Multi-path I/O 00:08:58.030 May have multiple subsystem ports: No 00:08:58.030 May have multiple controllers: No 00:08:58.030 Associated with SR-IOV VF: No 00:08:58.030 Max Data Transfer Size: 524288 00:08:58.030 Max Number of Namespaces: 256 00:08:58.030 Max Number of I/O Queues: 64 00:08:58.030 NVMe Specification Version (VS): 1.4 00:08:58.030 NVMe Specification Version (Identify): 1.4 00:08:58.030 Maximum Queue Entries: 2048 00:08:58.030 Contiguous Queues Required: Yes 00:08:58.030 Arbitration Mechanisms Supported 00:08:58.030 Weighted Round Robin: Not Supported 00:08:58.030 Vendor Specific: Not Supported 00:08:58.030 Reset Timeout: 7500 ms 00:08:58.030 Doorbell Stride: 4 bytes 00:08:58.030 NVM Subsystem Reset: Not Supported 00:08:58.030 Command Sets Supported 00:08:58.030 NVM Command Set: Supported 00:08:58.030 Boot Partition: Not Supported 00:08:58.030 Memory Page Size Minimum: 4096 bytes 00:08:58.030 Memory Page Size Maximum: 65536 bytes 00:08:58.030 Persistent Memory Region: Not
Supported 00:08:58.030 Optional Asynchronous Events Supported 00:08:58.030 Namespace Attribute Notices: Supported 00:08:58.030 Firmware Activation Notices: Not Supported 00:08:58.030 ANA Change Notices: Not Supported 00:08:58.030 PLE Aggregate Log Change Notices: Not Supported 00:08:58.030 LBA Status Info Alert Notices: Not Supported 00:08:58.030 EGE Aggregate Log Change Notices: Not Supported 00:08:58.030 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.030 Zone Descriptor Change Notices: Not Supported 00:08:58.030 Discovery Log Change Notices: Not Supported 00:08:58.030 Controller Attributes 00:08:58.030 128-bit Host Identifier: Not Supported 00:08:58.030 Non-Operational Permissive Mode: Not Supported 00:08:58.030 NVM Sets: Not Supported 00:08:58.030 Read Recovery Levels: Not Supported 00:08:58.030 Endurance Groups: Not Supported 00:08:58.030 Predictable Latency Mode: Not Supported 00:08:58.030 Traffic Based Keep ALive: Not Supported 00:08:58.030 Namespace Granularity: Not Supported 00:08:58.030 SQ Associations: Not Supported 00:08:58.030 UUID List: Not Supported 00:08:58.030 Multi-Domain Subsystem: Not Supported 00:08:58.030 Fixed Capacity Management: Not Supported 00:08:58.030 Variable Capacity Management: Not Supported 00:08:58.030 Delete Endurance Group: Not Supported 00:08:58.030 Delete NVM Set: Not Supported 00:08:58.030 Extended LBA Formats Supported: Supported 00:08:58.030 Flexible Data Placement Supported: Not Supported 00:08:58.030 00:08:58.030 Controller Memory Buffer Support 00:08:58.030 ================================ 00:08:58.030 Supported: No 00:08:58.030 00:08:58.030 Persistent Memory Region Support 00:08:58.030 ================================ 00:08:58.030 Supported: No 00:08:58.030 00:08:58.030 Admin Command Set Attributes 00:08:58.030 ============================ 00:08:58.030 Security Send/Receive: Not Supported 00:08:58.030 Format NVM: Supported 00:08:58.030 Firmware Activate/Download: Not Supported 00:08:58.030 Namespace Management: Supported 00:08:58.030 Device Self-Test: Not Supported 00:08:58.030 Directives: Supported 00:08:58.030 NVMe-MI: Not Supported 00:08:58.030 Virtualization Management: Not Supported 00:08:58.030 Doorbell Buffer Config: Supported 00:08:58.030 Get LBA Status Capability: Not Supported 00:08:58.030 Command & Feature Lockdown Capability: Not Supported 00:08:58.030 Abort Command Limit: 4 00:08:58.030 Async Event Request Limit: 4 00:08:58.030 Number of Firmware Slots: N/A 00:08:58.030 Firmware Slot 1 Read-Only: N/A 00:08:58.030 Firmware Activation Without Reset: N/A 00:08:58.030 Multiple Update Detection Support: N/A 00:08:58.030 Firmware Update Granularity: No Information Provided 00:08:58.030 Per-Namespace SMART Log: Yes 00:08:58.030 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.030 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:58.030 Command Effects Log Page: Supported 00:08:58.030 Get Log Page Extended Data: Supported 00:08:58.030 Telemetry Log Pages: Not Supported 00:08:58.030 Persistent Event Log Pages: Not Supported 00:08:58.030 Supported Log Pages Log Page: May Support 00:08:58.030 Commands Supported & Effects Log Page: Not Supported 00:08:58.030 Feature Identifiers & Effects Log Page:May Support 00:08:58.030 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.030 Data Area 4 for Telemetry Log: Not Supported 00:08:58.030 Error Log Page Entries Supported: 1 00:08:58.030 Keep Alive: Not Supported 00:08:58.030 00:08:58.030 NVM Command Set Attributes 00:08:58.030 ========================== 00:08:58.030 
Submission Queue Entry Size 00:08:58.030 Max: 64 00:08:58.030 Min: 64 00:08:58.030 Completion Queue Entry Size 00:08:58.030 Max: 16 00:08:58.030 Min: 16 00:08:58.030 Number of Namespaces: 256 00:08:58.030 Compare Command: Supported 00:08:58.030 Write Uncorrectable Command: Not Supported 00:08:58.030 Dataset Management Command: Supported 00:08:58.030 Write Zeroes Command: Supported 00:08:58.030 Set Features Save Field: Supported 00:08:58.030 Reservations: Not Supported 00:08:58.030 Timestamp: Supported 00:08:58.030 Copy: Supported 00:08:58.030 Volatile Write Cache: Present 00:08:58.030 Atomic Write Unit (Normal): 1 00:08:58.030 Atomic Write Unit (PFail): 1 00:08:58.030 Atomic Compare & Write Unit: 1 00:08:58.030 Fused Compare & Write: Not Supported 00:08:58.030 Scatter-Gather List 00:08:58.030 SGL Command Set: Supported 00:08:58.030 SGL Keyed: Not Supported 00:08:58.030 SGL Bit Bucket Descriptor: Not Supported 00:08:58.030 SGL Metadata Pointer: Not Supported 00:08:58.030 Oversized SGL: Not Supported 00:08:58.030 SGL Metadata Address: Not Supported 00:08:58.030 SGL Offset: Not Supported 00:08:58.030 Transport SGL Data Block: Not Supported 00:08:58.030 Replay Protected Memory Block: Not Supported 00:08:58.030 00:08:58.030 Firmware Slot Information 00:08:58.030 ========================= 00:08:58.030 Active slot: 1 00:08:58.030 Slot 1 Firmware Revision: 1.0 00:08:58.030 00:08:58.030 00:08:58.030 Commands Supported and Effects 00:08:58.030 ============================== 00:08:58.030 Admin Commands 00:08:58.030 -------------- 00:08:58.031 Delete I/O Submission Queue (00h): Supported 00:08:58.031 Create I/O Submission Queue (01h): Supported 00:08:58.031 Get Log Page (02h): Supported 00:08:58.031 Delete I/O Completion Queue (04h): Supported 00:08:58.031 Create I/O Completion Queue (05h): Supported 00:08:58.031 Identify (06h): Supported 00:08:58.031 Abort (08h): Supported 00:08:58.031 Set Features (09h): Supported 00:08:58.031 Get Features (0Ah): Supported 00:08:58.031 Asynchronous Event Request (0Ch): Supported 00:08:58.031 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.031 Directive Send (19h): Supported 00:08:58.031 Directive Receive (1Ah): Supported 00:08:58.031 Virtualization Management (1Ch): Supported 00:08:58.031 Doorbell Buffer Config (7Ch): Supported 00:08:58.031 Format NVM (80h): Supported LBA-Change 00:08:58.031 I/O Commands 00:08:58.031 ------------ 00:08:58.031 Flush (00h): Supported LBA-Change 00:08:58.031 Write (01h): Supported LBA-Change 00:08:58.031 Read (02h): Supported 00:08:58.031 Compare (05h): Supported 00:08:58.031 Write Zeroes (08h): Supported LBA-Change 00:08:58.031 Dataset Management (09h): Supported LBA-Change 00:08:58.031 Unknown (0Ch): Supported 00:08:58.031 Unknown (12h): Supported 00:08:58.031 Copy (19h): Supported LBA-Change 00:08:58.031 Unknown (1Dh): Supported LBA-Change 00:08:58.031 00:08:58.031 Error Log 00:08:58.031 ========= 00:08:58.031 00:08:58.031 Arbitration 00:08:58.031 =========== 00:08:58.031 Arbitration Burst: no limit 00:08:58.031 00:08:58.031 Power Management 00:08:58.031 ================ 00:08:58.031 Number of Power States: 1 00:08:58.031 Current Power State: Power State #0 00:08:58.031 Power State #0: 00:08:58.031 Max Power: 25.00 W 00:08:58.031 Non-Operational State: Operational 00:08:58.031 Entry Latency: 16 microseconds 00:08:58.031 Exit Latency: 4 microseconds 00:08:58.031 Relative Read Throughput: 0 00:08:58.031 Relative Read Latency: 0 00:08:58.031 Relative Write Throughput: 0 00:08:58.031 Relative Write Latency: 0 
00:08:58.031 Idle Power: Not Reported 00:08:58.031 Active Power: Not Reported 00:08:58.031 Non-Operational Permissive Mode: Not Supported 00:08:58.031 00:08:58.031 Health Information 00:08:58.031 ================== 00:08:58.031 Critical Warnings: 00:08:58.031 Available Spare Space: OK 00:08:58.031 Temperature: OK 00:08:58.031 Device Reliability: OK 00:08:58.031 Read Only: No 00:08:58.031 Volatile Memory Backup: OK 00:08:58.031 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.031 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.031 Available Spare: 0% 00:08:58.031 Available Spare Threshold: 0% 00:08:58.031 Life Percentage Used: 0% 00:08:58.031 Data Units Read: 3768 00:08:58.031 Data Units Written: 1742 00:08:58.031 Host Read Commands: 180385 00:08:58.031 Host Write Commands: 88523 00:08:58.031 Controller Busy Time: 0 minutes 00:08:58.031 Power Cycles: 0 00:08:58.031 Power On Hours: 0 hours 00:08:58.031 Unsafe Shutdowns: 0 00:08:58.031 Unrecoverable Media Errors: 0 00:08:58.031 Lifetime Error Log Entries: 0 00:08:58.031 Warning Temperature Time: 0 minutes 00:08:58.031 Critical Temperature Time: 0 minutes 00:08:58.031 00:08:58.031 Number of Queues 00:08:58.031 ================ 00:08:58.031 Number of I/O Submission Queues: 64 00:08:58.031 Number of I/O Completion Queues: 64 00:08:58.031 00:08:58.031 ZNS Specific Controller Data 00:08:58.031 ============================ 00:08:58.031 Zone Append Size Limit: 0 00:08:58.031 00:08:58.031 00:08:58.031 Active Namespaces 00:08:58.031 ================= 00:08:58.031 Namespace ID:1 00:08:58.031 Error Recovery Timeout: Unlimited 00:08:58.031 Command Set Identifier: NVM (00h) 00:08:58.031 Deallocate: Supported 00:08:58.031 Deallocated/Unwritten Error: Supported 00:08:58.031 Deallocated Read Value: All 0x00 00:08:58.031 Deallocate in Write Zeroes: Not Supported 00:08:58.031 Deallocated Guard Field: 0xFFFF 00:08:58.031 Flush: Supported 00:08:58.031 Reservation: Not Supported 00:08:58.031 Namespace Sharing Capabilities: Private 00:08:58.031 Size (in LBAs): 1048576 (4GiB) 00:08:58.031 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.031 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.031 Thin Provisioning: Not Supported 00:08:58.031 Per-NS Atomic Units: No 00:08:58.031 Maximum Single Source Range Length: 128 00:08:58.031 Maximum Copy Length: 128 00:08:58.031 Maximum Source Range Count: 128 00:08:58.031 NGUID/EUI64 Never Reused: No 00:08:58.031 Namespace Write Protected: No 00:08:58.031 Number of LBA Formats: 8 00:08:58.031 Current LBA Format: LBA Format #04 00:08:58.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.031 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.031 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.031 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.031 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.031 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.031 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.031 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.031 00:08:58.031 Namespace ID:2 00:08:58.031 Error Recovery Timeout: Unlimited 00:08:58.031 Command Set Identifier: NVM (00h) 00:08:58.031 Deallocate: Supported 00:08:58.031 Deallocated/Unwritten Error: Supported 00:08:58.031 Deallocated Read Value: All 0x00 00:08:58.031 Deallocate in Write Zeroes: Not Supported 00:08:58.031 Deallocated Guard Field: 0xFFFF 00:08:58.031 Flush: Supported 00:08:58.031 Reservation: Not Supported 00:08:58.031 Namespace Sharing Capabilities: Private 00:08:58.031 Size (in LBAs): 
1048576 (4GiB) 00:08:58.031 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.031 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.031 Thin Provisioning: Not Supported 00:08:58.031 Per-NS Atomic Units: No 00:08:58.031 Maximum Single Source Range Length: 128 00:08:58.031 Maximum Copy Length: 128 00:08:58.031 Maximum Source Range Count: 128 00:08:58.031 NGUID/EUI64 Never Reused: No 00:08:58.031 Namespace Write Protected: No 00:08:58.031 Number of LBA Formats: 8 00:08:58.031 Current LBA Format: LBA Format #04 00:08:58.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.031 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.031 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.031 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.031 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.031 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.031 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.031 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.031 00:08:58.031 Namespace ID:3 00:08:58.031 Error Recovery Timeout: Unlimited 00:08:58.031 Command Set Identifier: NVM (00h) 00:08:58.031 Deallocate: Supported 00:08:58.031 Deallocated/Unwritten Error: Supported 00:08:58.031 Deallocated Read Value: All 0x00 00:08:58.031 Deallocate in Write Zeroes: Not Supported 00:08:58.031 Deallocated Guard Field: 0xFFFF 00:08:58.031 Flush: Supported 00:08:58.031 Reservation: Not Supported 00:08:58.031 Namespace Sharing Capabilities: Private 00:08:58.031 Size (in LBAs): 1048576 (4GiB) 00:08:58.031 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.031 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.031 Thin Provisioning: Not Supported 00:08:58.031 Per-NS Atomic Units: No 00:08:58.031 Maximum Single Source Range Length: 128 00:08:58.031 Maximum Copy Length: 128 00:08:58.031 Maximum Source Range Count: 128 00:08:58.031 NGUID/EUI64 Never Reused: No 00:08:58.031 Namespace Write Protected: No 00:08:58.031 Number of LBA Formats: 8 00:08:58.031 Current LBA Format: LBA Format #04 00:08:58.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.031 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.031 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.031 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.031 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.031 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.031 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.031 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.031 00:08:58.031 10:36:28 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:58.031 10:36:28 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:08:58.290 ===================================================== 00:08:58.290 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:08:58.290 ===================================================== 00:08:58.290 Controller Capabilities/Features 00:08:58.290 ================================ 00:08:58.290 Vendor ID: 1b36 00:08:58.290 Subsystem Vendor ID: 1af4 00:08:58.290 Serial Number: 12340 00:08:58.290 Model Number: QEMU NVMe Ctrl 00:08:58.290 Firmware Version: 8.0.0 00:08:58.290 Recommended Arb Burst: 6 00:08:58.290 IEEE OUI Identifier: 00 54 52 00:08:58.290 Multi-path I/O 00:08:58.290 May have multiple subsystem ports: No 00:08:58.290 May have multiple controllers: No 00:08:58.290 Associated with SR-IOV VF: No 00:08:58.290 Max Data Transfer Size: 524288 00:08:58.290 Max Number of Namespaces: 256 
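Each identify dump in this log comes from one spdk_nvme_identify run per PCIe function, driven by the for-loop traced from nvme.sh above. A minimal sketch of that loop, reconstructed from the trace (how the bdfs array is actually populated upstream is an assumption here, inferred from the four addresses that appear in this log):

  #!/usr/bin/env bash
  # Sketch of the nvme.sh loop visible in this trace; bdfs contents assumed.
  bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)
  for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r "trtype:PCIe traddr:${bdf}" -i 0
  done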
00:08:58.290 Max Number of I/O Queues: 64 00:08:58.290 NVMe Specification Version (VS): 1.4 00:08:58.290 NVMe Specification Version (Identify): 1.4 00:08:58.290 Maximum Queue Entries: 2048 00:08:58.290 Contiguous Queues Required: Yes 00:08:58.290 Arbitration Mechanisms Supported 00:08:58.290 Weighted Round Robin: Not Supported 00:08:58.290 Vendor Specific: Not Supported 00:08:58.290 Reset Timeout: 7500 ms 00:08:58.290 Doorbell Stride: 4 bytes 00:08:58.290 NVM Subsystem Reset: Not Supported 00:08:58.290 Command Sets Supported 00:08:58.290 NVM Command Set: Supported 00:08:58.290 Boot Partition: Not Supported 00:08:58.290 Memory Page Size Minimum: 4096 bytes 00:08:58.290 Memory Page Size Maximum: 65536 bytes 00:08:58.290 Persistent Memory Region: Not Supported 00:08:58.290 Optional Asynchronous Events Supported 00:08:58.290 Namespace Attribute Notices: Supported 00:08:58.290 Firmware Activation Notices: Not Supported 00:08:58.290 ANA Change Notices: Not Supported 00:08:58.290 PLE Aggregate Log Change Notices: Not Supported 00:08:58.290 LBA Status Info Alert Notices: Not Supported 00:08:58.290 EGE Aggregate Log Change Notices: Not Supported 00:08:58.290 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.290 Zone Descriptor Change Notices: Not Supported 00:08:58.290 Discovery Log Change Notices: Not Supported 00:08:58.290 Controller Attributes 00:08:58.291 128-bit Host Identifier: Not Supported 00:08:58.291 Non-Operational Permissive Mode: Not Supported 00:08:58.291 NVM Sets: Not Supported 00:08:58.291 Read Recovery Levels: Not Supported 00:08:58.291 Endurance Groups: Not Supported 00:08:58.291 Predictable Latency Mode: Not Supported 00:08:58.291 Traffic Based Keep ALive: Not Supported 00:08:58.291 Namespace Granularity: Not Supported 00:08:58.291 SQ Associations: Not Supported 00:08:58.291 UUID List: Not Supported 00:08:58.291 Multi-Domain Subsystem: Not Supported 00:08:58.291 Fixed Capacity Management: Not Supported 00:08:58.291 Variable Capacity Management: Not Supported 00:08:58.291 Delete Endurance Group: Not Supported 00:08:58.291 Delete NVM Set: Not Supported 00:08:58.291 Extended LBA Formats Supported: Supported 00:08:58.291 Flexible Data Placement Supported: Not Supported 00:08:58.291 00:08:58.291 Controller Memory Buffer Support 00:08:58.291 ================================ 00:08:58.291 Supported: No 00:08:58.291 00:08:58.291 Persistent Memory Region Support 00:08:58.291 ================================ 00:08:58.291 Supported: No 00:08:58.291 00:08:58.291 Admin Command Set Attributes 00:08:58.291 ============================ 00:08:58.291 Security Send/Receive: Not Supported 00:08:58.291 Format NVM: Supported 00:08:58.291 Firmware Activate/Download: Not Supported 00:08:58.291 Namespace Management: Supported 00:08:58.291 Device Self-Test: Not Supported 00:08:58.291 Directives: Supported 00:08:58.291 NVMe-MI: Not Supported 00:08:58.291 Virtualization Management: Not Supported 00:08:58.291 Doorbell Buffer Config: Supported 00:08:58.291 Get LBA Status Capability: Not Supported 00:08:58.291 Command & Feature Lockdown Capability: Not Supported 00:08:58.291 Abort Command Limit: 4 00:08:58.291 Async Event Request Limit: 4 00:08:58.291 Number of Firmware Slots: N/A 00:08:58.291 Firmware Slot 1 Read-Only: N/A 00:08:58.291 Firmware Activation Without Reset: N/A 00:08:58.291 Multiple Update Detection Support: N/A 00:08:58.291 Firmware Update Granularity: No Information Provided 00:08:58.291 Per-Namespace SMART Log: Yes 00:08:58.291 Asymmetric Namespace Access Log Page: Not Supported 
00:08:58.291 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:58.291 Command Effects Log Page: Supported 00:08:58.291 Get Log Page Extended Data: Supported 00:08:58.291 Telemetry Log Pages: Not Supported 00:08:58.291 Persistent Event Log Pages: Not Supported 00:08:58.291 Supported Log Pages Log Page: May Support 00:08:58.291 Commands Supported & Effects Log Page: Not Supported 00:08:58.291 Feature Identifiers & Effects Log Page:May Support 00:08:58.291 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.291 Data Area 4 for Telemetry Log: Not Supported 00:08:58.291 Error Log Page Entries Supported: 1 00:08:58.291 Keep Alive: Not Supported 00:08:58.291 00:08:58.291 NVM Command Set Attributes 00:08:58.291 ========================== 00:08:58.291 Submission Queue Entry Size 00:08:58.291 Max: 64 00:08:58.291 Min: 64 00:08:58.291 Completion Queue Entry Size 00:08:58.291 Max: 16 00:08:58.291 Min: 16 00:08:58.291 Number of Namespaces: 256 00:08:58.291 Compare Command: Supported 00:08:58.291 Write Uncorrectable Command: Not Supported 00:08:58.291 Dataset Management Command: Supported 00:08:58.291 Write Zeroes Command: Supported 00:08:58.291 Set Features Save Field: Supported 00:08:58.291 Reservations: Not Supported 00:08:58.291 Timestamp: Supported 00:08:58.291 Copy: Supported 00:08:58.291 Volatile Write Cache: Present 00:08:58.291 Atomic Write Unit (Normal): 1 00:08:58.291 Atomic Write Unit (PFail): 1 00:08:58.291 Atomic Compare & Write Unit: 1 00:08:58.291 Fused Compare & Write: Not Supported 00:08:58.291 Scatter-Gather List 00:08:58.291 SGL Command Set: Supported 00:08:58.291 SGL Keyed: Not Supported 00:08:58.291 SGL Bit Bucket Descriptor: Not Supported 00:08:58.291 SGL Metadata Pointer: Not Supported 00:08:58.291 Oversized SGL: Not Supported 00:08:58.291 SGL Metadata Address: Not Supported 00:08:58.291 SGL Offset: Not Supported 00:08:58.291 Transport SGL Data Block: Not Supported 00:08:58.291 Replay Protected Memory Block: Not Supported 00:08:58.291 00:08:58.291 Firmware Slot Information 00:08:58.291 ========================= 00:08:58.291 Active slot: 1 00:08:58.291 Slot 1 Firmware Revision: 1.0 00:08:58.291 00:08:58.291 00:08:58.291 Commands Supported and Effects 00:08:58.291 ============================== 00:08:58.291 Admin Commands 00:08:58.291 -------------- 00:08:58.291 Delete I/O Submission Queue (00h): Supported 00:08:58.291 Create I/O Submission Queue (01h): Supported 00:08:58.291 Get Log Page (02h): Supported 00:08:58.291 Delete I/O Completion Queue (04h): Supported 00:08:58.291 Create I/O Completion Queue (05h): Supported 00:08:58.291 Identify (06h): Supported 00:08:58.291 Abort (08h): Supported 00:08:58.291 Set Features (09h): Supported 00:08:58.291 Get Features (0Ah): Supported 00:08:58.291 Asynchronous Event Request (0Ch): Supported 00:08:58.291 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.291 Directive Send (19h): Supported 00:08:58.291 Directive Receive (1Ah): Supported 00:08:58.291 Virtualization Management (1Ch): Supported 00:08:58.291 Doorbell Buffer Config (7Ch): Supported 00:08:58.291 Format NVM (80h): Supported LBA-Change 00:08:58.291 I/O Commands 00:08:58.291 ------------ 00:08:58.291 Flush (00h): Supported LBA-Change 00:08:58.291 Write (01h): Supported LBA-Change 00:08:58.291 Read (02h): Supported 00:08:58.291 Compare (05h): Supported 00:08:58.291 Write Zeroes (08h): Supported LBA-Change 00:08:58.291 Dataset Management (09h): Supported LBA-Change 00:08:58.291 Unknown (0Ch): Supported 00:08:58.291 Unknown (12h): Supported 00:08:58.291 Copy (19h): 
Supported LBA-Change 00:08:58.291 Unknown (1Dh): Supported LBA-Change 00:08:58.291 00:08:58.291 Error Log 00:08:58.291 ========= 00:08:58.291 00:08:58.291 Arbitration 00:08:58.291 =========== 00:08:58.291 Arbitration Burst: no limit 00:08:58.291 00:08:58.291 Power Management 00:08:58.291 ================ 00:08:58.291 Number of Power States: 1 00:08:58.291 Current Power State: Power State #0 00:08:58.291 Power State #0: 00:08:58.291 Max Power: 25.00 W 00:08:58.291 Non-Operational State: Operational 00:08:58.291 Entry Latency: 16 microseconds 00:08:58.291 Exit Latency: 4 microseconds 00:08:58.291 Relative Read Throughput: 0 00:08:58.291 Relative Read Latency: 0 00:08:58.291 Relative Write Throughput: 0 00:08:58.291 Relative Write Latency: 0 00:08:58.291 Idle Power: Not Reported 00:08:58.291 Active Power: Not Reported 00:08:58.291 Non-Operational Permissive Mode: Not Supported 00:08:58.291 00:08:58.291 Health Information 00:08:58.291 ================== 00:08:58.291 Critical Warnings: 00:08:58.291 Available Spare Space: OK 00:08:58.291 Temperature: OK 00:08:58.291 Device Reliability: OK 00:08:58.291 Read Only: No 00:08:58.291 Volatile Memory Backup: OK 00:08:58.291 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.291 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.291 Available Spare: 0% 00:08:58.291 Available Spare Threshold: 0% 00:08:58.291 Life Percentage Used: 0% 00:08:58.291 Data Units Read: 1808 00:08:58.291 Data Units Written: 835 00:08:58.291 Host Read Commands: 86939 00:08:58.291 Host Write Commands: 43173 00:08:58.291 Controller Busy Time: 0 minutes 00:08:58.291 Power Cycles: 0 00:08:58.291 Power On Hours: 0 hours 00:08:58.291 Unsafe Shutdowns: 0 00:08:58.291 Unrecoverable Media Errors: 0 00:08:58.291 Lifetime Error Log Entries: 0 00:08:58.291 Warning Temperature Time: 0 minutes 00:08:58.291 Critical Temperature Time: 0 minutes 00:08:58.291 00:08:58.291 Number of Queues 00:08:58.291 ================ 00:08:58.291 Number of I/O Submission Queues: 64 00:08:58.291 Number of I/O Completion Queues: 64 00:08:58.291 00:08:58.291 ZNS Specific Controller Data 00:08:58.291 ============================ 00:08:58.291 Zone Append Size Limit: 0 00:08:58.291 00:08:58.291 00:08:58.291 Active Namespaces 00:08:58.291 ================= 00:08:58.291 Namespace ID:1 00:08:58.291 Error Recovery Timeout: Unlimited 00:08:58.291 Command Set Identifier: NVM (00h) 00:08:58.291 Deallocate: Supported 00:08:58.291 Deallocated/Unwritten Error: Supported 00:08:58.291 Deallocated Read Value: All 0x00 00:08:58.291 Deallocate in Write Zeroes: Not Supported 00:08:58.291 Deallocated Guard Field: 0xFFFF 00:08:58.291 Flush: Supported 00:08:58.291 Reservation: Not Supported 00:08:58.291 Metadata Transferred as: Separate Metadata Buffer 00:08:58.291 Namespace Sharing Capabilities: Private 00:08:58.292 Size (in LBAs): 1548666 (5GiB) 00:08:58.292 Capacity (in LBAs): 1548666 (5GiB) 00:08:58.292 Utilization (in LBAs): 1548666 (5GiB) 00:08:58.292 Thin Provisioning: Not Supported 00:08:58.292 Per-NS Atomic Units: No 00:08:58.292 Maximum Single Source Range Length: 128 00:08:58.292 Maximum Copy Length: 128 00:08:58.292 Maximum Source Range Count: 128 00:08:58.292 NGUID/EUI64 Never Reused: No 00:08:58.292 Namespace Write Protected: No 00:08:58.292 Number of LBA Formats: 8 00:08:58.292 Current LBA Format: LBA Format #07 00:08:58.292 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.292 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.292 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.292 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:08:58.292 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.292 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.292 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.292 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.292 00:08:58.292 10:36:28 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:58.292 10:36:28 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:08:58.551 ===================================================== 00:08:58.551 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:08:58.551 ===================================================== 00:08:58.551 Controller Capabilities/Features 00:08:58.551 ================================ 00:08:58.551 Vendor ID: 1b36 00:08:58.551 Subsystem Vendor ID: 1af4 00:08:58.551 Serial Number: 12341 00:08:58.551 Model Number: QEMU NVMe Ctrl 00:08:58.551 Firmware Version: 8.0.0 00:08:58.551 Recommended Arb Burst: 6 00:08:58.551 IEEE OUI Identifier: 00 54 52 00:08:58.551 Multi-path I/O 00:08:58.551 May have multiple subsystem ports: No 00:08:58.551 May have multiple controllers: No 00:08:58.551 Associated with SR-IOV VF: No 00:08:58.551 Max Data Transfer Size: 524288 00:08:58.551 Max Number of Namespaces: 256 00:08:58.551 Max Number of I/O Queues: 64 00:08:58.551 NVMe Specification Version (VS): 1.4 00:08:58.551 NVMe Specification Version (Identify): 1.4 00:08:58.551 Maximum Queue Entries: 2048 00:08:58.551 Contiguous Queues Required: Yes 00:08:58.551 Arbitration Mechanisms Supported 00:08:58.551 Weighted Round Robin: Not Supported 00:08:58.551 Vendor Specific: Not Supported 00:08:58.551 Reset Timeout: 7500 ms 00:08:58.551 Doorbell Stride: 4 bytes 00:08:58.551 NVM Subsystem Reset: Not Supported 00:08:58.551 Command Sets Supported 00:08:58.551 NVM Command Set: Supported 00:08:58.551 Boot Partition: Not Supported 00:08:58.551 Memory Page Size Minimum: 4096 bytes 00:08:58.551 Memory Page Size Maximum: 65536 bytes 00:08:58.551 Persistent Memory Region: Not Supported 00:08:58.551 Optional Asynchronous Events Supported 00:08:58.551 Namespace Attribute Notices: Supported 00:08:58.551 Firmware Activation Notices: Not Supported 00:08:58.551 ANA Change Notices: Not Supported 00:08:58.551 PLE Aggregate Log Change Notices: Not Supported 00:08:58.551 LBA Status Info Alert Notices: Not Supported 00:08:58.551 EGE Aggregate Log Change Notices: Not Supported 00:08:58.551 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.551 Zone Descriptor Change Notices: Not Supported 00:08:58.551 Discovery Log Change Notices: Not Supported 00:08:58.551 Controller Attributes 00:08:58.551 128-bit Host Identifier: Not Supported 00:08:58.551 Non-Operational Permissive Mode: Not Supported 00:08:58.551 NVM Sets: Not Supported 00:08:58.551 Read Recovery Levels: Not Supported 00:08:58.551 Endurance Groups: Not Supported 00:08:58.551 Predictable Latency Mode: Not Supported 00:08:58.551 Traffic Based Keep ALive: Not Supported 00:08:58.551 Namespace Granularity: Not Supported 00:08:58.551 SQ Associations: Not Supported 00:08:58.551 UUID List: Not Supported 00:08:58.551 Multi-Domain Subsystem: Not Supported 00:08:58.551 Fixed Capacity Management: Not Supported 00:08:58.551 Variable Capacity Management: Not Supported 00:08:58.551 Delete Endurance Group: Not Supported 00:08:58.551 Delete NVM Set: Not Supported 00:08:58.551 Extended LBA Formats Supported: Supported 00:08:58.551 Flexible Data Placement Supported: Not Supported 
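With four controllers dumped back to back, it can help to cut a single controller's section out of a saved capture of this output. A rough sketch, using the banner line as the delimiter (identify.log is a hypothetical file holding the text above):

  # Print the section for 0000:00:07.0; stop at the next controller banner.
  awk '/NVMe Controller at 0000:00:07.0/ {p=1}
       p && /NVMe Controller at/ && !/0000:00:07.0/ {exit}
       p' identify.log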
00:08:58.551 00:08:58.551 Controller Memory Buffer Support 00:08:58.551 ================================ 00:08:58.551 Supported: No 00:08:58.551 00:08:58.551 Persistent Memory Region Support 00:08:58.551 ================================ 00:08:58.551 Supported: No 00:08:58.551 00:08:58.551 Admin Command Set Attributes 00:08:58.551 ============================ 00:08:58.551 Security Send/Receive: Not Supported 00:08:58.551 Format NVM: Supported 00:08:58.552 Firmware Activate/Download: Not Supported 00:08:58.552 Namespace Management: Supported 00:08:58.552 Device Self-Test: Not Supported 00:08:58.552 Directives: Supported 00:08:58.552 NVMe-MI: Not Supported 00:08:58.552 Virtualization Management: Not Supported 00:08:58.552 Doorbell Buffer Config: Supported 00:08:58.552 Get LBA Status Capability: Not Supported 00:08:58.552 Command & Feature Lockdown Capability: Not Supported 00:08:58.552 Abort Command Limit: 4 00:08:58.552 Async Event Request Limit: 4 00:08:58.552 Number of Firmware Slots: N/A 00:08:58.552 Firmware Slot 1 Read-Only: N/A 00:08:58.552 Firmware Activation Without Reset: N/A 00:08:58.552 Multiple Update Detection Support: N/A 00:08:58.552 Firmware Update Granularity: No Information Provided 00:08:58.552 Per-Namespace SMART Log: Yes 00:08:58.552 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.552 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:58.552 Command Effects Log Page: Supported 00:08:58.552 Get Log Page Extended Data: Supported 00:08:58.552 Telemetry Log Pages: Not Supported 00:08:58.552 Persistent Event Log Pages: Not Supported 00:08:58.552 Supported Log Pages Log Page: May Support 00:08:58.552 Commands Supported & Effects Log Page: Not Supported 00:08:58.552 Feature Identifiers & Effects Log Page:May Support 00:08:58.552 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.552 Data Area 4 for Telemetry Log: Not Supported 00:08:58.552 Error Log Page Entries Supported: 1 00:08:58.552 Keep Alive: Not Supported 00:08:58.552 00:08:58.552 NVM Command Set Attributes 00:08:58.552 ========================== 00:08:58.552 Submission Queue Entry Size 00:08:58.552 Max: 64 00:08:58.552 Min: 64 00:08:58.552 Completion Queue Entry Size 00:08:58.552 Max: 16 00:08:58.552 Min: 16 00:08:58.552 Number of Namespaces: 256 00:08:58.552 Compare Command: Supported 00:08:58.552 Write Uncorrectable Command: Not Supported 00:08:58.552 Dataset Management Command: Supported 00:08:58.552 Write Zeroes Command: Supported 00:08:58.552 Set Features Save Field: Supported 00:08:58.552 Reservations: Not Supported 00:08:58.552 Timestamp: Supported 00:08:58.552 Copy: Supported 00:08:58.552 Volatile Write Cache: Present 00:08:58.552 Atomic Write Unit (Normal): 1 00:08:58.552 Atomic Write Unit (PFail): 1 00:08:58.552 Atomic Compare & Write Unit: 1 00:08:58.552 Fused Compare & Write: Not Supported 00:08:58.552 Scatter-Gather List 00:08:58.552 SGL Command Set: Supported 00:08:58.552 SGL Keyed: Not Supported 00:08:58.552 SGL Bit Bucket Descriptor: Not Supported 00:08:58.552 SGL Metadata Pointer: Not Supported 00:08:58.552 Oversized SGL: Not Supported 00:08:58.552 SGL Metadata Address: Not Supported 00:08:58.552 SGL Offset: Not Supported 00:08:58.552 Transport SGL Data Block: Not Supported 00:08:58.552 Replay Protected Memory Block: Not Supported 00:08:58.552 00:08:58.552 Firmware Slot Information 00:08:58.552 ========================= 00:08:58.552 Active slot: 1 00:08:58.552 Slot 1 Firmware Revision: 1.0 00:08:58.552 00:08:58.552 00:08:58.552 Commands Supported and Effects 00:08:58.552 
============================== 00:08:58.552 Admin Commands 00:08:58.552 -------------- 00:08:58.552 Delete I/O Submission Queue (00h): Supported 00:08:58.552 Create I/O Submission Queue (01h): Supported 00:08:58.552 Get Log Page (02h): Supported 00:08:58.552 Delete I/O Completion Queue (04h): Supported 00:08:58.552 Create I/O Completion Queue (05h): Supported 00:08:58.552 Identify (06h): Supported 00:08:58.552 Abort (08h): Supported 00:08:58.552 Set Features (09h): Supported 00:08:58.552 Get Features (0Ah): Supported 00:08:58.552 Asynchronous Event Request (0Ch): Supported 00:08:58.552 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.552 Directive Send (19h): Supported 00:08:58.552 Directive Receive (1Ah): Supported 00:08:58.552 Virtualization Management (1Ch): Supported 00:08:58.552 Doorbell Buffer Config (7Ch): Supported 00:08:58.552 Format NVM (80h): Supported LBA-Change 00:08:58.552 I/O Commands 00:08:58.552 ------------ 00:08:58.552 Flush (00h): Supported LBA-Change 00:08:58.552 Write (01h): Supported LBA-Change 00:08:58.552 Read (02h): Supported 00:08:58.552 Compare (05h): Supported 00:08:58.552 Write Zeroes (08h): Supported LBA-Change 00:08:58.552 Dataset Management (09h): Supported LBA-Change 00:08:58.552 Unknown (0Ch): Supported 00:08:58.552 Unknown (12h): Supported 00:08:58.552 Copy (19h): Supported LBA-Change 00:08:58.552 Unknown (1Dh): Supported LBA-Change 00:08:58.552 00:08:58.552 Error Log 00:08:58.552 ========= 00:08:58.552 00:08:58.552 Arbitration 00:08:58.552 =========== 00:08:58.552 Arbitration Burst: no limit 00:08:58.552 00:08:58.552 Power Management 00:08:58.552 ================ 00:08:58.552 Number of Power States: 1 00:08:58.552 Current Power State: Power State #0 00:08:58.552 Power State #0: 00:08:58.552 Max Power: 25.00 W 00:08:58.552 Non-Operational State: Operational 00:08:58.552 Entry Latency: 16 microseconds 00:08:58.552 Exit Latency: 4 microseconds 00:08:58.552 Relative Read Throughput: 0 00:08:58.552 Relative Read Latency: 0 00:08:58.552 Relative Write Throughput: 0 00:08:58.552 Relative Write Latency: 0 00:08:58.552 Idle Power: Not Reported 00:08:58.552 Active Power: Not Reported 00:08:58.552 Non-Operational Permissive Mode: Not Supported 00:08:58.552 00:08:58.552 Health Information 00:08:58.552 ================== 00:08:58.552 Critical Warnings: 00:08:58.552 Available Spare Space: OK 00:08:58.552 Temperature: OK 00:08:58.552 Device Reliability: OK 00:08:58.552 Read Only: No 00:08:58.552 Volatile Memory Backup: OK 00:08:58.552 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.552 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.552 Available Spare: 0% 00:08:58.552 Available Spare Threshold: 0% 00:08:58.552 Life Percentage Used: 0% 00:08:58.552 Data Units Read: 1219 00:08:58.552 Data Units Written: 566 00:08:58.552 Host Read Commands: 59720 00:08:58.552 Host Write Commands: 29375 00:08:58.552 Controller Busy Time: 0 minutes 00:08:58.552 Power Cycles: 0 00:08:58.552 Power On Hours: 0 hours 00:08:58.552 Unsafe Shutdowns: 0 00:08:58.552 Unrecoverable Media Errors: 0 00:08:58.552 Lifetime Error Log Entries: 0 00:08:58.552 Warning Temperature Time: 0 minutes 00:08:58.552 Critical Temperature Time: 0 minutes 00:08:58.552 00:08:58.552 Number of Queues 00:08:58.552 ================ 00:08:58.552 Number of I/O Submission Queues: 64 00:08:58.552 Number of I/O Completion Queues: 64 00:08:58.552 00:08:58.552 ZNS Specific Controller Data 00:08:58.552 ============================ 00:08:58.552 Zone Append Size Limit: 0 00:08:58.552 00:08:58.552 
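The SMART temperatures above are raw Kelvin values; the Celsius figures in parentheses follow the integer convention the dump itself uses (323 Kelvin -> 50 Celsius, 343 Kelvin -> 70 Celsius), i.e. subtract 273:

  k=323; echo "$((k - 273)) C"   # prints: 50 C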
00:08:58.552 Active Namespaces 00:08:58.552 ================= 00:08:58.552 Namespace ID:1 00:08:58.552 Error Recovery Timeout: Unlimited 00:08:58.552 Command Set Identifier: NVM (00h) 00:08:58.552 Deallocate: Supported 00:08:58.552 Deallocated/Unwritten Error: Supported 00:08:58.552 Deallocated Read Value: All 0x00 00:08:58.552 Deallocate in Write Zeroes: Not Supported 00:08:58.552 Deallocated Guard Field: 0xFFFF 00:08:58.552 Flush: Supported 00:08:58.552 Reservation: Not Supported 00:08:58.552 Namespace Sharing Capabilities: Private 00:08:58.552 Size (in LBAs): 1310720 (5GiB) 00:08:58.552 Capacity (in LBAs): 1310720 (5GiB) 00:08:58.552 Utilization (in LBAs): 1310720 (5GiB) 00:08:58.552 Thin Provisioning: Not Supported 00:08:58.552 Per-NS Atomic Units: No 00:08:58.552 Maximum Single Source Range Length: 128 00:08:58.552 Maximum Copy Length: 128 00:08:58.552 Maximum Source Range Count: 128 00:08:58.552 NGUID/EUI64 Never Reused: No 00:08:58.552 Namespace Write Protected: No 00:08:58.552 Number of LBA Formats: 8 00:08:58.552 Current LBA Format: LBA Format #04 00:08:58.552 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.552 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.552 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.552 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.552 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.552 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.552 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.552 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.552 00:08:58.552 10:36:29 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:58.552 10:36:29 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:08:58.811 ===================================================== 00:08:58.811 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:58.811 ===================================================== 00:08:58.811 Controller Capabilities/Features 00:08:58.811 ================================ 00:08:58.811 Vendor ID: 1b36 00:08:58.811 Subsystem Vendor ID: 1af4 00:08:58.811 Serial Number: 12342 00:08:58.811 Model Number: QEMU NVMe Ctrl 00:08:58.811 Firmware Version: 8.0.0 00:08:58.811 Recommended Arb Burst: 6 00:08:58.811 IEEE OUI Identifier: 00 54 52 00:08:58.811 Multi-path I/O 00:08:58.811 May have multiple subsystem ports: No 00:08:58.811 May have multiple controllers: No 00:08:58.811 Associated with SR-IOV VF: No 00:08:58.811 Max Data Transfer Size: 524288 00:08:58.811 Max Number of Namespaces: 256 00:08:58.811 Max Number of I/O Queues: 64 00:08:58.811 NVMe Specification Version (VS): 1.4 00:08:58.811 NVMe Specification Version (Identify): 1.4 00:08:58.811 Maximum Queue Entries: 2048 00:08:58.811 Contiguous Queues Required: Yes 00:08:58.811 Arbitration Mechanisms Supported 00:08:58.812 Weighted Round Robin: Not Supported 00:08:58.812 Vendor Specific: Not Supported 00:08:58.812 Reset Timeout: 7500 ms 00:08:58.812 Doorbell Stride: 4 bytes 00:08:58.812 NVM Subsystem Reset: Not Supported 00:08:58.812 Command Sets Supported 00:08:58.812 NVM Command Set: Supported 00:08:58.812 Boot Partition: Not Supported 00:08:58.812 Memory Page Size Minimum: 4096 bytes 00:08:58.812 Memory Page Size Maximum: 65536 bytes 00:08:58.812 Persistent Memory Region: Not Supported 00:08:58.812 Optional Asynchronous Events Supported 00:08:58.812 Namespace Attribute Notices: Supported 00:08:58.812 Firmware Activation Notices: Not Supported 00:08:58.812 ANA Change 
Notices: Not Supported 00:08:58.812 PLE Aggregate Log Change Notices: Not Supported 00:08:58.812 LBA Status Info Alert Notices: Not Supported 00:08:58.812 EGE Aggregate Log Change Notices: Not Supported 00:08:58.812 Normal NVM Subsystem Shutdown event: Not Supported 00:08:58.812 Zone Descriptor Change Notices: Not Supported 00:08:58.812 Discovery Log Change Notices: Not Supported 00:08:58.812 Controller Attributes 00:08:58.812 128-bit Host Identifier: Not Supported 00:08:58.812 Non-Operational Permissive Mode: Not Supported 00:08:58.812 NVM Sets: Not Supported 00:08:58.812 Read Recovery Levels: Not Supported 00:08:58.812 Endurance Groups: Not Supported 00:08:58.812 Predictable Latency Mode: Not Supported 00:08:58.812 Traffic Based Keep ALive: Not Supported 00:08:58.812 Namespace Granularity: Not Supported 00:08:58.812 SQ Associations: Not Supported 00:08:58.812 UUID List: Not Supported 00:08:58.812 Multi-Domain Subsystem: Not Supported 00:08:58.812 Fixed Capacity Management: Not Supported 00:08:58.812 Variable Capacity Management: Not Supported 00:08:58.812 Delete Endurance Group: Not Supported 00:08:58.812 Delete NVM Set: Not Supported 00:08:58.812 Extended LBA Formats Supported: Supported 00:08:58.812 Flexible Data Placement Supported: Not Supported 00:08:58.812 00:08:58.812 Controller Memory Buffer Support 00:08:58.812 ================================ 00:08:58.812 Supported: No 00:08:58.812 00:08:58.812 Persistent Memory Region Support 00:08:58.812 ================================ 00:08:58.812 Supported: No 00:08:58.812 00:08:58.812 Admin Command Set Attributes 00:08:58.812 ============================ 00:08:58.812 Security Send/Receive: Not Supported 00:08:58.812 Format NVM: Supported 00:08:58.812 Firmware Activate/Download: Not Supported 00:08:58.812 Namespace Management: Supported 00:08:58.812 Device Self-Test: Not Supported 00:08:58.812 Directives: Supported 00:08:58.812 NVMe-MI: Not Supported 00:08:58.812 Virtualization Management: Not Supported 00:08:58.812 Doorbell Buffer Config: Supported 00:08:58.812 Get LBA Status Capability: Not Supported 00:08:58.812 Command & Feature Lockdown Capability: Not Supported 00:08:58.812 Abort Command Limit: 4 00:08:58.812 Async Event Request Limit: 4 00:08:58.812 Number of Firmware Slots: N/A 00:08:58.812 Firmware Slot 1 Read-Only: N/A 00:08:58.812 Firmware Activation Without Reset: N/A 00:08:58.812 Multiple Update Detection Support: N/A 00:08:58.812 Firmware Update Granularity: No Information Provided 00:08:58.812 Per-Namespace SMART Log: Yes 00:08:58.812 Asymmetric Namespace Access Log Page: Not Supported 00:08:58.812 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:58.812 Command Effects Log Page: Supported 00:08:58.812 Get Log Page Extended Data: Supported 00:08:58.812 Telemetry Log Pages: Not Supported 00:08:58.812 Persistent Event Log Pages: Not Supported 00:08:58.812 Supported Log Pages Log Page: May Support 00:08:58.812 Commands Supported & Effects Log Page: Not Supported 00:08:58.812 Feature Identifiers & Effects Log Page:May Support 00:08:58.812 NVMe-MI Commands & Effects Log Page: May Support 00:08:58.812 Data Area 4 for Telemetry Log: Not Supported 00:08:58.812 Error Log Page Entries Supported: 1 00:08:58.812 Keep Alive: Not Supported 00:08:58.812 00:08:58.812 NVM Command Set Attributes 00:08:58.812 ========================== 00:08:58.812 Submission Queue Entry Size 00:08:58.812 Max: 64 00:08:58.812 Min: 64 00:08:58.812 Completion Queue Entry Size 00:08:58.812 Max: 16 00:08:58.812 Min: 16 00:08:58.812 Number of Namespaces: 256 
00:08:58.812 Compare Command: Supported 00:08:58.812 Write Uncorrectable Command: Not Supported 00:08:58.812 Dataset Management Command: Supported 00:08:58.812 Write Zeroes Command: Supported 00:08:58.812 Set Features Save Field: Supported 00:08:58.812 Reservations: Not Supported 00:08:58.812 Timestamp: Supported 00:08:58.812 Copy: Supported 00:08:58.812 Volatile Write Cache: Present 00:08:58.812 Atomic Write Unit (Normal): 1 00:08:58.812 Atomic Write Unit (PFail): 1 00:08:58.812 Atomic Compare & Write Unit: 1 00:08:58.812 Fused Compare & Write: Not Supported 00:08:58.812 Scatter-Gather List 00:08:58.812 SGL Command Set: Supported 00:08:58.812 SGL Keyed: Not Supported 00:08:58.812 SGL Bit Bucket Descriptor: Not Supported 00:08:58.812 SGL Metadata Pointer: Not Supported 00:08:58.812 Oversized SGL: Not Supported 00:08:58.812 SGL Metadata Address: Not Supported 00:08:58.812 SGL Offset: Not Supported 00:08:58.812 Transport SGL Data Block: Not Supported 00:08:58.812 Replay Protected Memory Block: Not Supported 00:08:58.812 00:08:58.812 Firmware Slot Information 00:08:58.812 ========================= 00:08:58.812 Active slot: 1 00:08:58.812 Slot 1 Firmware Revision: 1.0 00:08:58.812 00:08:58.812 00:08:58.812 Commands Supported and Effects 00:08:58.812 ============================== 00:08:58.812 Admin Commands 00:08:58.812 -------------- 00:08:58.812 Delete I/O Submission Queue (00h): Supported 00:08:58.812 Create I/O Submission Queue (01h): Supported 00:08:58.812 Get Log Page (02h): Supported 00:08:58.812 Delete I/O Completion Queue (04h): Supported 00:08:58.812 Create I/O Completion Queue (05h): Supported 00:08:58.812 Identify (06h): Supported 00:08:58.812 Abort (08h): Supported 00:08:58.812 Set Features (09h): Supported 00:08:58.812 Get Features (0Ah): Supported 00:08:58.812 Asynchronous Event Request (0Ch): Supported 00:08:58.812 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:58.812 Directive Send (19h): Supported 00:08:58.812 Directive Receive (1Ah): Supported 00:08:58.812 Virtualization Management (1Ch): Supported 00:08:58.812 Doorbell Buffer Config (7Ch): Supported 00:08:58.812 Format NVM (80h): Supported LBA-Change 00:08:58.812 I/O Commands 00:08:58.812 ------------ 00:08:58.812 Flush (00h): Supported LBA-Change 00:08:58.812 Write (01h): Supported LBA-Change 00:08:58.812 Read (02h): Supported 00:08:58.812 Compare (05h): Supported 00:08:58.812 Write Zeroes (08h): Supported LBA-Change 00:08:58.812 Dataset Management (09h): Supported LBA-Change 00:08:58.812 Unknown (0Ch): Supported 00:08:58.812 Unknown (12h): Supported 00:08:58.812 Copy (19h): Supported LBA-Change 00:08:58.812 Unknown (1Dh): Supported LBA-Change 00:08:58.812 00:08:58.812 Error Log 00:08:58.812 ========= 00:08:58.812 00:08:58.812 Arbitration 00:08:58.812 =========== 00:08:58.812 Arbitration Burst: no limit 00:08:58.812 00:08:58.812 Power Management 00:08:58.812 ================ 00:08:58.812 Number of Power States: 1 00:08:58.812 Current Power State: Power State #0 00:08:58.812 Power State #0: 00:08:58.812 Max Power: 25.00 W 00:08:58.812 Non-Operational State: Operational 00:08:58.812 Entry Latency: 16 microseconds 00:08:58.812 Exit Latency: 4 microseconds 00:08:58.812 Relative Read Throughput: 0 00:08:58.812 Relative Read Latency: 0 00:08:58.812 Relative Write Throughput: 0 00:08:58.812 Relative Write Latency: 0 00:08:58.812 Idle Power: Not Reported 00:08:58.812 Active Power: Not Reported 00:08:58.812 Non-Operational Permissive Mode: Not Supported 00:08:58.812 00:08:58.812 Health Information 00:08:58.812 
================== 00:08:58.812 Critical Warnings: 00:08:58.812 Available Spare Space: OK 00:08:58.812 Temperature: OK 00:08:58.812 Device Reliability: OK 00:08:58.812 Read Only: No 00:08:58.812 Volatile Memory Backup: OK 00:08:58.812 Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.812 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:58.812 Available Spare: 0% 00:08:58.812 Available Spare Threshold: 0% 00:08:58.812 Life Percentage Used: 0% 00:08:58.812 Data Units Read: 3768 00:08:58.812 Data Units Written: 1742 00:08:58.812 Host Read Commands: 180385 00:08:58.812 Host Write Commands: 88523 00:08:58.812 Controller Busy Time: 0 minutes 00:08:58.812 Power Cycles: 0 00:08:58.812 Power On Hours: 0 hours 00:08:58.813 Unsafe Shutdowns: 0 00:08:58.813 Unrecoverable Media Errors: 0 00:08:58.813 Lifetime Error Log Entries: 0 00:08:58.813 Warning Temperature Time: 0 minutes 00:08:58.813 Critical Temperature Time: 0 minutes 00:08:58.813 00:08:58.813 Number of Queues 00:08:58.813 ================ 00:08:58.813 Number of I/O Submission Queues: 64 00:08:58.813 Number of I/O Completion Queues: 64 00:08:58.813 00:08:58.813 ZNS Specific Controller Data 00:08:58.813 ============================ 00:08:58.813 Zone Append Size Limit: 0 00:08:58.813 00:08:58.813 00:08:58.813 Active Namespaces 00:08:58.813 ================= 00:08:58.813 Namespace ID:1 00:08:58.813 Error Recovery Timeout: Unlimited 00:08:58.813 Command Set Identifier: NVM (00h) 00:08:58.813 Deallocate: Supported 00:08:58.813 Deallocated/Unwritten Error: Supported 00:08:58.813 Deallocated Read Value: All 0x00 00:08:58.813 Deallocate in Write Zeroes: Not Supported 00:08:58.813 Deallocated Guard Field: 0xFFFF 00:08:58.813 Flush: Supported 00:08:58.813 Reservation: Not Supported 00:08:58.813 Namespace Sharing Capabilities: Private 00:08:58.813 Size (in LBAs): 1048576 (4GiB) 00:08:58.813 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.813 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.813 Thin Provisioning: Not Supported 00:08:58.813 Per-NS Atomic Units: No 00:08:58.813 Maximum Single Source Range Length: 128 00:08:58.813 Maximum Copy Length: 128 00:08:58.813 Maximum Source Range Count: 128 00:08:58.813 NGUID/EUI64 Never Reused: No 00:08:58.813 Namespace Write Protected: No 00:08:58.813 Number of LBA Formats: 8 00:08:58.813 Current LBA Format: LBA Format #04 00:08:58.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.813 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.813 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.813 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.813 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.813 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.813 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.813 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.813 00:08:58.813 Namespace ID:2 00:08:58.813 Error Recovery Timeout: Unlimited 00:08:58.813 Command Set Identifier: NVM (00h) 00:08:58.813 Deallocate: Supported 00:08:58.813 Deallocated/Unwritten Error: Supported 00:08:58.813 Deallocated Read Value: All 0x00 00:08:58.813 Deallocate in Write Zeroes: Not Supported 00:08:58.813 Deallocated Guard Field: 0xFFFF 00:08:58.813 Flush: Supported 00:08:58.813 Reservation: Not Supported 00:08:58.813 Namespace Sharing Capabilities: Private 00:08:58.813 Size (in LBAs): 1048576 (4GiB) 00:08:58.813 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.813 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.813 Thin Provisioning: Not Supported 00:08:58.813 Per-NS Atomic Units: No 
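The GiB figures in parentheses follow from the LBA count times the data size of the current LBA format (#04 here, 4096-byte blocks), which is easy to spot-check in shell:

  echo $((1048576 * 4096))   # 4294967296 bytes = 4 GiB (namespaces on 12342)
  echo $((262144 * 4096))    # 1073741824 bytes = 1 GiB (the FDP namespace earlier)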
00:08:58.813 Maximum Single Source Range Length: 128 00:08:58.813 Maximum Copy Length: 128 00:08:58.813 Maximum Source Range Count: 128 00:08:58.813 NGUID/EUI64 Never Reused: No 00:08:58.813 Namespace Write Protected: No 00:08:58.813 Number of LBA Formats: 8 00:08:58.813 Current LBA Format: LBA Format #04 00:08:58.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.813 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.813 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.813 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.813 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.813 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.813 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.813 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.813 00:08:58.813 Namespace ID:3 00:08:58.813 Error Recovery Timeout: Unlimited 00:08:58.813 Command Set Identifier: NVM (00h) 00:08:58.813 Deallocate: Supported 00:08:58.813 Deallocated/Unwritten Error: Supported 00:08:58.813 Deallocated Read Value: All 0x00 00:08:58.813 Deallocate in Write Zeroes: Not Supported 00:08:58.813 Deallocated Guard Field: 0xFFFF 00:08:58.813 Flush: Supported 00:08:58.813 Reservation: Not Supported 00:08:58.813 Namespace Sharing Capabilities: Private 00:08:58.813 Size (in LBAs): 1048576 (4GiB) 00:08:58.813 Capacity (in LBAs): 1048576 (4GiB) 00:08:58.813 Utilization (in LBAs): 1048576 (4GiB) 00:08:58.813 Thin Provisioning: Not Supported 00:08:58.813 Per-NS Atomic Units: No 00:08:58.813 Maximum Single Source Range Length: 128 00:08:58.813 Maximum Copy Length: 128 00:08:58.813 Maximum Source Range Count: 128 00:08:58.813 NGUID/EUI64 Never Reused: No 00:08:58.813 Namespace Write Protected: No 00:08:58.813 Number of LBA Formats: 8 00:08:58.813 Current LBA Format: LBA Format #04 00:08:58.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:58.813 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:58.813 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:58.813 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:58.813 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:58.813 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:58.813 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:58.813 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:58.813 00:08:58.813 10:36:29 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:58.813 10:36:29 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:08:59.073 ===================================================== 00:08:59.073 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:59.073 ===================================================== 00:08:59.073 Controller Capabilities/Features 00:08:59.073 ================================ 00:08:59.073 Vendor ID: 1b36 00:08:59.073 Subsystem Vendor ID: 1af4 00:08:59.073 Serial Number: 12343 00:08:59.073 Model Number: QEMU NVMe Ctrl 00:08:59.073 Firmware Version: 8.0.0 00:08:59.073 Recommended Arb Burst: 6 00:08:59.073 IEEE OUI Identifier: 00 54 52 00:08:59.073 Multi-path I/O 00:08:59.073 May have multiple subsystem ports: No 00:08:59.073 May have multiple controllers: Yes 00:08:59.073 Associated with SR-IOV VF: No 00:08:59.073 Max Data Transfer Size: 524288 00:08:59.073 Max Number of Namespaces: 256 00:08:59.073 Max Number of I/O Queues: 64 00:08:59.073 NVMe Specification Version (VS): 1.4 00:08:59.073 NVMe Specification Version (Identify): 1.4 00:08:59.073 Maximum Queue Entries: 2048 
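Controller 12343 is the FDP-enabled subsystem (nqn.2019-08.org.qemu:fdp-subsys3). Its FDP statistics log earlier in this output (Host bytes with metadata written: 402612224, Media bytes with metadata written: 402698240) allow a crude write-amplification estimate as media bytes over host bytes, with the caveat that exactly what these counters include is not spelled out here:

  awk 'BEGIN { printf "WAF ~ %.4f\n", 402698240 / 402612224 }'   # ~1.0002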
00:08:59.073 Contiguous Queues Required: Yes 00:08:59.073 Arbitration Mechanisms Supported 00:08:59.073 Weighted Round Robin: Not Supported 00:08:59.073 Vendor Specific: Not Supported 00:08:59.073 Reset Timeout: 7500 ms 00:08:59.073 Doorbell Stride: 4 bytes 00:08:59.073 NVM Subsystem Reset: Not Supported 00:08:59.073 Command Sets Supported 00:08:59.073 NVM Command Set: Supported 00:08:59.073 Boot Partition: Not Supported 00:08:59.073 Memory Page Size Minimum: 4096 bytes 00:08:59.073 Memory Page Size Maximum: 65536 bytes 00:08:59.073 Persistent Memory Region: Not Supported 00:08:59.073 Optional Asynchronous Events Supported 00:08:59.073 Namespace Attribute Notices: Supported 00:08:59.073 Firmware Activation Notices: Not Supported 00:08:59.073 ANA Change Notices: Not Supported 00:08:59.073 PLE Aggregate Log Change Notices: Not Supported 00:08:59.073 LBA Status Info Alert Notices: Not Supported 00:08:59.073 EGE Aggregate Log Change Notices: Not Supported 00:08:59.073 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.073 Zone Descriptor Change Notices: Not Supported 00:08:59.073 Discovery Log Change Notices: Not Supported 00:08:59.073 Controller Attributes 00:08:59.073 128-bit Host Identifier: Not Supported 00:08:59.073 Non-Operational Permissive Mode: Not Supported 00:08:59.073 NVM Sets: Not Supported 00:08:59.073 Read Recovery Levels: Not Supported 00:08:59.073 Endurance Groups: Supported 00:08:59.073 Predictable Latency Mode: Not Supported 00:08:59.073 Traffic Based Keep Alive: Not Supported 00:08:59.073 Namespace Granularity: Not Supported 00:08:59.073 SQ Associations: Not Supported 00:08:59.073 UUID List: Not Supported 00:08:59.073 Multi-Domain Subsystem: Not Supported 00:08:59.073 Fixed Capacity Management: Not Supported 00:08:59.073 Variable Capacity Management: Not Supported 00:08:59.073 Delete Endurance Group: Not Supported 00:08:59.073 Delete NVM Set: Not Supported 00:08:59.073 Extended LBA Formats Supported: Supported 00:08:59.073 Flexible Data Placement Supported: Supported 00:08:59.073 00:08:59.073 Controller Memory Buffer Support 00:08:59.073 ================================ 00:08:59.073 Supported: No 00:08:59.073 00:08:59.073 Persistent Memory Region Support 00:08:59.073 ================================ 00:08:59.073 Supported: No 00:08:59.073 00:08:59.073 Admin Command Set Attributes 00:08:59.073 ============================ 00:08:59.073 Security Send/Receive: Not Supported 00:08:59.073 Format NVM: Supported 00:08:59.073 Firmware Activate/Download: Not Supported 00:08:59.073 Namespace Management: Supported 00:08:59.073 Device Self-Test: Not Supported 00:08:59.073 Directives: Supported 00:08:59.073 NVMe-MI: Not Supported 00:08:59.073 Virtualization Management: Not Supported 00:08:59.073 Doorbell Buffer Config: Supported 00:08:59.073 Get LBA Status Capability: Not Supported 00:08:59.073 Command & Feature Lockdown Capability: Not Supported 00:08:59.073 Abort Command Limit: 4 00:08:59.073 Async Event Request Limit: 4 00:08:59.073 Number of Firmware Slots: N/A 00:08:59.073 Firmware Slot 1 Read-Only: N/A 00:08:59.073 Firmware Activation Without Reset: N/A 00:08:59.073 Multiple Update Detection Support: N/A 00:08:59.073 Firmware Update Granularity: No Information Provided 00:08:59.073 Per-Namespace SMART Log: Yes 00:08:59.073 Asymmetric Namespace Access Log Page: Not Supported 00:08:59.073 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:59.073 Command Effects Log Page: Supported 00:08:59.073 Get Log Page Extended Data: Supported 00:08:59.073 Telemetry Log Pages: Not
Supported 00:08:59.073 Persistent Event Log Pages: Not Supported 00:08:59.073 Supported Log Pages Log Page: May Support 00:08:59.073 Commands Supported & Effects Log Page: Not Supported 00:08:59.073 Feature Identifiers & Effects Log Page: May Support 00:08:59.073 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.073 Data Area 4 for Telemetry Log: Not Supported 00:08:59.073 Error Log Page Entries Supported: 1 00:08:59.073 Keep Alive: Not Supported 00:08:59.073 00:08:59.073 NVM Command Set Attributes 00:08:59.073 ========================== 00:08:59.073 Submission Queue Entry Size 00:08:59.073 Max: 64 00:08:59.073 Min: 64 00:08:59.073 Completion Queue Entry Size 00:08:59.073 Max: 16 00:08:59.073 Min: 16 00:08:59.073 Number of Namespaces: 256 00:08:59.073 Compare Command: Supported 00:08:59.073 Write Uncorrectable Command: Not Supported 00:08:59.073 Dataset Management Command: Supported 00:08:59.073 Write Zeroes Command: Supported 00:08:59.073 Set Features Save Field: Supported 00:08:59.073 Reservations: Not Supported 00:08:59.073 Timestamp: Supported 00:08:59.073 Copy: Supported 00:08:59.073 Volatile Write Cache: Present 00:08:59.073 Atomic Write Unit (Normal): 1 00:08:59.073 Atomic Write Unit (PFail): 1 00:08:59.073 Atomic Compare & Write Unit: 1 00:08:59.073 Fused Compare & Write: Not Supported 00:08:59.073 Scatter-Gather List 00:08:59.073 SGL Command Set: Supported 00:08:59.073 SGL Keyed: Not Supported 00:08:59.073 SGL Bit Bucket Descriptor: Not Supported 00:08:59.073 SGL Metadata Pointer: Not Supported 00:08:59.073 Oversized SGL: Not Supported 00:08:59.073 SGL Metadata Address: Not Supported 00:08:59.073 SGL Offset: Not Supported 00:08:59.073 Transport SGL Data Block: Not Supported 00:08:59.073 Replay Protected Memory Block: Not Supported 00:08:59.073 00:08:59.073 Firmware Slot Information 00:08:59.073 ========================= 00:08:59.073 Active slot: 1 00:08:59.073 Slot 1 Firmware Revision: 1.0 00:08:59.073 00:08:59.073 00:08:59.073 Commands Supported and Effects 00:08:59.073 ============================== 00:08:59.073 Admin Commands 00:08:59.073 -------------- 00:08:59.073 Delete I/O Submission Queue (00h): Supported 00:08:59.073 Create I/O Submission Queue (01h): Supported 00:08:59.073 Get Log Page (02h): Supported 00:08:59.073 Delete I/O Completion Queue (04h): Supported 00:08:59.073 Create I/O Completion Queue (05h): Supported 00:08:59.073 Identify (06h): Supported 00:08:59.073 Abort (08h): Supported 00:08:59.073 Set Features (09h): Supported 00:08:59.073 Get Features (0Ah): Supported 00:08:59.073 Asynchronous Event Request (0Ch): Supported 00:08:59.073 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.073 Directive Send (19h): Supported 00:08:59.073 Directive Receive (1Ah): Supported 00:08:59.073 Virtualization Management (1Ch): Supported 00:08:59.073 Doorbell Buffer Config (7Ch): Supported 00:08:59.073 Format NVM (80h): Supported LBA-Change 00:08:59.073 I/O Commands 00:08:59.073 ------------ 00:08:59.073 Flush (00h): Supported LBA-Change 00:08:59.073 Write (01h): Supported LBA-Change 00:08:59.073 Read (02h): Supported 00:08:59.073 Compare (05h): Supported 00:08:59.074 Write Zeroes (08h): Supported LBA-Change 00:08:59.074 Dataset Management (09h): Supported LBA-Change 00:08:59.074 Unknown (0Ch): Supported 00:08:59.074 Unknown (12h): Supported 00:08:59.074 Copy (19h): Supported LBA-Change 00:08:59.074 Unknown (1Dh): Supported LBA-Change 00:08:59.074 00:08:59.074 Error Log 00:08:59.074 ========= 00:08:59.074 00:08:59.074 Arbitration 00:08:59.074 ===========
00:08:59.074 Arbitration Burst: no limit 00:08:59.074 00:08:59.074 Power Management 00:08:59.074 ================ 00:08:59.074 Number of Power States: 1 00:08:59.074 Current Power State: Power State #0 00:08:59.074 Power State #0: 00:08:59.074 Max Power: 25.00 W 00:08:59.074 Non-Operational State: Operational 00:08:59.074 Entry Latency: 16 microseconds 00:08:59.074 Exit Latency: 4 microseconds 00:08:59.074 Relative Read Throughput: 0 00:08:59.074 Relative Read Latency: 0 00:08:59.074 Relative Write Throughput: 0 00:08:59.074 Relative Write Latency: 0 00:08:59.074 Idle Power: Not Reported 00:08:59.074 Active Power: Not Reported 00:08:59.074 Non-Operational Permissive Mode: Not Supported 00:08:59.074 00:08:59.074 Health Information 00:08:59.074 ================== 00:08:59.074 Critical Warnings: 00:08:59.074 Available Spare Space: OK 00:08:59.074 Temperature: OK 00:08:59.074 Device Reliability: OK 00:08:59.074 Read Only: No 00:08:59.074 Volatile Memory Backup: OK 00:08:59.074 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.074 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:59.074 Available Spare: 0% 00:08:59.074 Available Spare Threshold: 0% 00:08:59.074 Life Percentage Used: 0% 00:08:59.074 Data Units Read: 1327 00:08:59.074 Data Units Written: 612 00:08:59.074 Host Read Commands: 60639 00:08:59.074 Host Write Commands: 29776 00:08:59.074 Controller Busy Time: 0 minutes 00:08:59.074 Power Cycles: 0 00:08:59.074 Power On Hours: 0 hours 00:08:59.074 Unsafe Shutdowns: 0 00:08:59.074 Unrecoverable Media Errors: 0 00:08:59.074 Lifetime Error Log Entries: 0 00:08:59.074 Warning Temperature Time: 0 minutes 00:08:59.074 Critical Temperature Time: 0 minutes 00:08:59.074 00:08:59.074 Number of Queues 00:08:59.074 ================ 00:08:59.074 Number of I/O Submission Queues: 64 00:08:59.074 Number of I/O Completion Queues: 64 00:08:59.074 00:08:59.074 ZNS Specific Controller Data 00:08:59.074 ============================ 00:08:59.074 Zone Append Size Limit: 0 00:08:59.074 00:08:59.074 00:08:59.074 Active Namespaces 00:08:59.074 ================= 00:08:59.074 Namespace ID:1 00:08:59.074 Error Recovery Timeout: Unlimited 00:08:59.074 Command Set Identifier: NVM (00h) 00:08:59.074 Deallocate: Supported 00:08:59.074 Deallocated/Unwritten Error: Supported 00:08:59.074 Deallocated Read Value: All 0x00 00:08:59.074 Deallocate in Write Zeroes: Not Supported 00:08:59.074 Deallocated Guard Field: 0xFFFF 00:08:59.074 Flush: Supported 00:08:59.074 Reservation: Not Supported 00:08:59.074 Namespace Sharing Capabilities: Multiple Controllers 00:08:59.074 Size (in LBAs): 262144 (1GiB) 00:08:59.074 Capacity (in LBAs): 262144 (1GiB) 00:08:59.074 Utilization (in LBAs): 262144 (1GiB) 00:08:59.074 Thin Provisioning: Not Supported 00:08:59.074 Per-NS Atomic Units: No 00:08:59.074 Maximum Single Source Range Length: 128 00:08:59.074 Maximum Copy Length: 128 00:08:59.074 Maximum Source Range Count: 128 00:08:59.074 NGUID/EUI64 Never Reused: No 00:08:59.074 Namespace Write Protected: No 00:08:59.074 Endurance group ID: 1 00:08:59.074 Number of LBA Formats: 8 00:08:59.074 Current LBA Format: LBA Format #04 00:08:59.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.074 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.074 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.074 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.074 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.074 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.074 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:59.074 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.074 00:08:59.074 Get Feature FDP: 00:08:59.074 ================ 00:08:59.074 Enabled: Yes 00:08:59.074 FDP configuration index: 0 00:08:59.074 00:08:59.074 FDP configurations log page 00:08:59.074 =========================== 00:08:59.074 Number of FDP configurations: 1 00:08:59.074 Version: 0 00:08:59.074 Size: 112 00:08:59.074 FDP Configuration Descriptor: 0 00:08:59.074 Descriptor Size: 96 00:08:59.074 Reclaim Group Identifier format: 2 00:08:59.074 FDP Volatile Write Cache: Not Present 00:08:59.074 FDP Configuration: Valid 00:08:59.074 Vendor Specific Size: 0 00:08:59.074 Number of Reclaim Groups: 2 00:08:59.074 Number of Reclaim Unit Handles: 8 00:08:59.074 Max Placement Identifiers: 128 00:08:59.074 Number of Namespaces Supported: 256 00:08:59.074 Reclaim Unit Nominal Size: 6000000 bytes 00:08:59.074 Estimated Reclaim Unit Time Limit: Not Reported 00:08:59.074 RUH Desc #000: RUH Type: Initially Isolated 00:08:59.074 RUH Desc #001: RUH Type: Initially Isolated 00:08:59.074 RUH Desc #002: RUH Type: Initially Isolated 00:08:59.074 RUH Desc #003: RUH Type: Initially Isolated 00:08:59.074 RUH Desc #004: RUH Type: Initially Isolated 00:08:59.074 RUH Desc #005: RUH Type: Initially Isolated 00:08:59.074 RUH Desc #006: RUH Type: Initially Isolated 00:08:59.074 RUH Desc #007: RUH Type: Initially Isolated 00:08:59.074 00:08:59.074 FDP reclaim unit handle usage log page 00:08:59.074 ====================================== 00:08:59.074 Number of Reclaim Unit Handles: 8 00:08:59.074 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:59.074 RUH Usage Desc #001: RUH Attributes: Unused 00:08:59.074 RUH Usage Desc #002: RUH Attributes: Unused 00:08:59.074 RUH Usage Desc #003: RUH Attributes: Unused 00:08:59.074 RUH Usage Desc #004: RUH Attributes: Unused 00:08:59.074 RUH Usage Desc #005: RUH Attributes: Unused 00:08:59.074 RUH Usage Desc #006: RUH Attributes: Unused 00:08:59.074 RUH Usage Desc #007: RUH Attributes: Unused 00:08:59.074 00:08:59.074 FDP statistics log page 00:08:59.074 ======================= 00:08:59.074 Host bytes with metadata written: 402612224 00:08:59.074 Media bytes with metadata written: 402698240 00:08:59.074 Media bytes erased: 0 00:08:59.074 00:08:59.074 FDP events log page 00:08:59.074 =================== 00:08:59.074 Number of FDP events: 0 00:08:59.074 00:08:59.074 00:08:59.074 real 0m1.159s 00:08:59.074 user 0m0.398s 00:08:59.074 sys 0m0.539s 00:08:59.074 10:36:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.074 10:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 ************************************ 00:08:59.074 END TEST nvme_identify 00:08:59.074 ************************************ 00:08:59.074 10:36:29 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:59.074 10:36:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.074 10:36:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.074 10:36:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.074 ************************************ 00:08:59.074 START TEST nvme_perf 00:08:59.074 ************************************ 00:08:59.074 10:36:29 -- common/autotest_common.sh@1114 -- # nvme_perf 00:08:59.074 10:36:29 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:00.454 Initializing NVMe Controllers 00:09:00.454 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:00.454
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:00.454 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:00.454 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:00.454 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:00.454 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:00.454 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:00.454 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:00.454 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:00.454 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:00.454 Initialization complete. Launching workers. 00:09:00.454 ======================================================== 00:09:00.454 Latency(us) 00:09:00.454 Device Information : IOPS MiB/s Average min max 00:09:00.454 PCIE (0000:00:06.0) NSID 1 from core 0: 18205.65 213.35 7026.88 5094.61 31826.62 00:09:00.454 PCIE (0000:00:07.0) NSID 1 from core 0: 18205.65 213.35 7021.25 5089.69 30363.51 00:09:00.454 PCIE (0000:00:09.0) NSID 1 from core 0: 18205.65 213.35 7014.36 5232.57 29751.69 00:09:00.454 PCIE (0000:00:08.0) NSID 1 from core 0: 18205.65 213.35 7007.32 5245.91 28246.01 00:09:00.454 PCIE (0000:00:08.0) NSID 2 from core 0: 18205.65 213.35 7000.40 5225.81 26741.11 00:09:00.454 PCIE (0000:00:08.0) NSID 3 from core 0: 18332.97 214.84 6945.37 5243.22 18366.29 00:09:00.454 ======================================================== 00:09:00.454 Total : 109361.23 1281.58 7002.53 5089.69 31826.62 00:09:00.454 00:09:00.454 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:00.454 ================================================================================= 00:09:00.454 1.00000% : 5318.498us 00:09:00.454 10.00000% : 5620.972us 00:09:00.454 25.00000% : 5999.065us 00:09:00.454 50.00000% : 6604.012us 00:09:00.454 75.00000% : 7208.960us 00:09:00.454 90.00000% : 8267.618us 00:09:00.454 95.00000% : 9779.988us 00:09:00.454 98.00000% : 12804.726us 00:09:00.454 99.00000% : 16434.412us 00:09:00.454 99.50000% : 29440.788us 00:09:00.454 99.90000% : 31457.280us 00:09:00.454 99.99000% : 31860.578us 00:09:00.454 99.99900% : 31860.578us 00:09:00.454 99.99990% : 31860.578us 00:09:00.454 99.99999% : 31860.578us 00:09:00.454 00:09:00.454 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:00.454 ================================================================================= 00:09:00.454 1.00000% : 5419.323us 00:09:00.454 10.00000% : 5721.797us 00:09:00.454 25.00000% : 6074.683us 00:09:00.454 50.00000% : 6604.012us 00:09:00.454 75.00000% : 7108.135us 00:09:00.454 90.00000% : 8217.206us 00:09:00.454 95.00000% : 9628.751us 00:09:00.454 98.00000% : 13812.972us 00:09:00.454 99.00000% : 17644.308us 00:09:00.454 99.50000% : 28029.243us 00:09:00.454 99.90000% : 30045.735us 00:09:00.454 99.99000% : 30449.034us 00:09:00.454 99.99900% : 30449.034us 00:09:00.454 99.99990% : 30449.034us 00:09:00.454 99.99999% : 30449.034us 00:09:00.454 00:09:00.454 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:00.454 ================================================================================= 00:09:00.454 1.00000% : 5444.529us 00:09:00.454 10.00000% : 5747.003us 00:09:00.454 25.00000% : 6074.683us 00:09:00.454 50.00000% : 6604.012us 00:09:00.454 75.00000% : 7108.135us 00:09:00.454 90.00000% : 7965.145us 00:09:00.454 95.00000% : 10284.111us 00:09:00.454 98.00000% : 14518.745us 00:09:00.454 99.00000% : 15728.640us 00:09:00.454 99.50000% : 27424.295us 00:09:00.454 99.90000% : 29440.788us 
00:09:00.454 99.99000% : 29844.086us 00:09:00.454 99.99900% : 29844.086us 00:09:00.454 99.99990% : 29844.086us 00:09:00.454 99.99999% : 29844.086us 00:09:00.454 00:09:00.454 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:00.454 ================================================================================= 00:09:00.454 1.00000% : 5444.529us 00:09:00.454 10.00000% : 5747.003us 00:09:00.454 25.00000% : 6074.683us 00:09:00.454 50.00000% : 6604.012us 00:09:00.454 75.00000% : 7108.135us 00:09:00.454 90.00000% : 7864.320us 00:09:00.454 95.00000% : 10939.471us 00:09:00.454 98.00000% : 13510.498us 00:09:00.454 99.00000% : 15224.517us 00:09:00.454 99.50000% : 25811.102us 00:09:00.454 99.90000% : 27827.594us 00:09:00.454 99.99000% : 28230.892us 00:09:00.454 99.99900% : 28432.542us 00:09:00.454 99.99990% : 28432.542us 00:09:00.454 99.99999% : 28432.542us 00:09:00.454 00:09:00.454 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:00.454 ================================================================================= 00:09:00.454 1.00000% : 5419.323us 00:09:00.454 10.00000% : 5747.003us 00:09:00.454 25.00000% : 6074.683us 00:09:00.454 50.00000% : 6604.012us 00:09:00.454 75.00000% : 7108.135us 00:09:00.454 90.00000% : 8015.557us 00:09:00.454 95.00000% : 10939.471us 00:09:00.454 98.00000% : 12855.138us 00:09:00.454 99.00000% : 14821.218us 00:09:00.454 99.50000% : 24399.557us 00:09:00.454 99.90000% : 26416.049us 00:09:00.454 99.99000% : 26819.348us 00:09:00.454 99.99900% : 26819.348us 00:09:00.454 99.99990% : 26819.348us 00:09:00.454 99.99999% : 26819.348us 00:09:00.454 00:09:00.454 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:00.454 ================================================================================= 00:09:00.454 1.00000% : 5419.323us 00:09:00.454 10.00000% : 5747.003us 00:09:00.454 25.00000% : 6074.683us 00:09:00.454 50.00000% : 6604.012us 00:09:00.454 75.00000% : 7158.548us 00:09:00.454 90.00000% : 8116.382us 00:09:00.454 95.00000% : 10586.585us 00:09:00.454 98.00000% : 12401.428us 00:09:00.454 99.00000% : 15325.342us 00:09:00.455 99.50000% : 16535.237us 00:09:00.455 99.90000% : 17946.782us 00:09:00.455 99.99000% : 18350.080us 00:09:00.455 99.99900% : 18450.905us 00:09:00.455 99.99990% : 18450.905us 00:09:00.455 99.99999% : 18450.905us 00:09:00.455 00:09:00.455 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:00.455 ============================================================================== 00:09:00.455 Range in us Cumulative IO count 00:09:00.455 5091.643 - 5116.849: 0.0273% ( 5) 00:09:00.455 5116.849 - 5142.055: 0.0710% ( 8) 00:09:00.455 5142.055 - 5167.262: 0.1147% ( 8) 00:09:00.455 5167.262 - 5192.468: 0.1912% ( 14) 00:09:00.455 5192.468 - 5217.674: 0.2841% ( 17) 00:09:00.455 5217.674 - 5242.880: 0.4480% ( 30) 00:09:00.455 5242.880 - 5268.086: 0.5736% ( 23) 00:09:00.455 5268.086 - 5293.292: 0.7758% ( 37) 00:09:00.455 5293.292 - 5318.498: 1.0052% ( 42) 00:09:00.455 5318.498 - 5343.705: 1.3440% ( 62) 00:09:00.455 5343.705 - 5368.911: 1.8138% ( 86) 00:09:00.455 5368.911 - 5394.117: 2.3765% ( 103) 00:09:00.455 5394.117 - 5419.323: 3.0813% ( 129) 00:09:00.455 5419.323 - 5444.529: 3.9062% ( 151) 00:09:00.455 5444.529 - 5469.735: 4.8186% ( 167) 00:09:00.455 5469.735 - 5494.942: 5.6764% ( 157) 00:09:00.455 5494.942 - 5520.148: 6.5505% ( 160) 00:09:00.455 5520.148 - 5545.354: 7.4956% ( 173) 00:09:00.455 5545.354 - 5570.560: 8.3807% ( 162) 00:09:00.455 5570.560 - 5595.766: 9.2821% ( 165) 00:09:00.455 
5595.766 - 5620.972: 10.2273% ( 173) 00:09:00.455 5620.972 - 5646.178: 11.1724% ( 173) 00:09:00.455 5646.178 - 5671.385: 12.1285% ( 175) 00:09:00.455 5671.385 - 5696.591: 13.0791% ( 174) 00:09:00.455 5696.591 - 5721.797: 14.0898% ( 185) 00:09:00.455 5721.797 - 5747.003: 15.0131% ( 169) 00:09:00.455 5747.003 - 5772.209: 16.1003% ( 199) 00:09:00.455 5772.209 - 5797.415: 17.0946% ( 182) 00:09:00.455 5797.415 - 5822.622: 18.0616% ( 177) 00:09:00.455 5822.622 - 5847.828: 19.0396% ( 179) 00:09:00.455 5847.828 - 5873.034: 20.1541% ( 204) 00:09:00.455 5873.034 - 5898.240: 21.0719% ( 168) 00:09:00.455 5898.240 - 5923.446: 22.0717% ( 183) 00:09:00.455 5923.446 - 5948.652: 23.1316% ( 194) 00:09:00.455 5948.652 - 5973.858: 24.1040% ( 178) 00:09:00.455 5973.858 - 5999.065: 25.1803% ( 197) 00:09:00.455 5999.065 - 6024.271: 26.1254% ( 173) 00:09:00.455 6024.271 - 6049.477: 27.1416% ( 186) 00:09:00.455 6049.477 - 6074.683: 28.2233% ( 198) 00:09:00.455 6074.683 - 6099.889: 29.2723% ( 192) 00:09:00.455 6099.889 - 6125.095: 30.2885% ( 186) 00:09:00.455 6125.095 - 6150.302: 31.3156% ( 188) 00:09:00.455 6150.302 - 6175.508: 32.3208% ( 184) 00:09:00.455 6175.508 - 6200.714: 33.3861% ( 195) 00:09:00.455 6200.714 - 6225.920: 34.4078% ( 187) 00:09:00.455 6225.920 - 6251.126: 35.4677% ( 194) 00:09:00.455 6251.126 - 6276.332: 36.4893% ( 187) 00:09:00.455 6276.332 - 6301.538: 37.6311% ( 209) 00:09:00.455 6301.538 - 6326.745: 38.6473% ( 186) 00:09:00.455 6326.745 - 6351.951: 39.6471% ( 183) 00:09:00.455 6351.951 - 6377.157: 40.7343% ( 199) 00:09:00.455 6377.157 - 6402.363: 41.8269% ( 200) 00:09:00.455 6402.363 - 6427.569: 42.8049% ( 179) 00:09:00.455 6427.569 - 6452.775: 43.8757% ( 196) 00:09:00.455 6452.775 - 6503.188: 46.0118% ( 391) 00:09:00.455 6503.188 - 6553.600: 48.0605% ( 375) 00:09:00.455 6553.600 - 6604.012: 50.2404% ( 399) 00:09:00.455 6604.012 - 6654.425: 52.3164% ( 380) 00:09:00.455 6654.425 - 6704.837: 54.4690% ( 394) 00:09:00.455 6704.837 - 6755.249: 56.6324% ( 396) 00:09:00.455 6755.249 - 6805.662: 58.8396% ( 404) 00:09:00.455 6805.662 - 6856.074: 60.9320% ( 383) 00:09:00.455 6856.074 - 6906.486: 63.0627% ( 390) 00:09:00.455 6906.486 - 6956.898: 65.1552% ( 383) 00:09:00.455 6956.898 - 7007.311: 67.3405% ( 400) 00:09:00.455 7007.311 - 7057.723: 69.3728% ( 372) 00:09:00.455 7057.723 - 7108.135: 71.3942% ( 370) 00:09:00.455 7108.135 - 7158.548: 73.3228% ( 353) 00:09:00.455 7158.548 - 7208.960: 75.0492% ( 316) 00:09:00.455 7208.960 - 7259.372: 76.8739% ( 334) 00:09:00.455 7259.372 - 7309.785: 78.6276% ( 321) 00:09:00.455 7309.785 - 7360.197: 80.1246% ( 274) 00:09:00.455 7360.197 - 7410.609: 81.4795% ( 248) 00:09:00.455 7410.609 - 7461.022: 82.7251% ( 228) 00:09:00.455 7461.022 - 7511.434: 83.8505% ( 206) 00:09:00.455 7511.434 - 7561.846: 84.6919% ( 154) 00:09:00.455 7561.846 - 7612.258: 85.4840% ( 145) 00:09:00.455 7612.258 - 7662.671: 86.0795% ( 109) 00:09:00.455 7662.671 - 7713.083: 86.5876% ( 93) 00:09:00.455 7713.083 - 7763.495: 86.9537% ( 67) 00:09:00.455 7763.495 - 7813.908: 87.3252% ( 68) 00:09:00.455 7813.908 - 7864.320: 87.6530% ( 60) 00:09:00.455 7864.320 - 7914.732: 87.9480% ( 54) 00:09:00.455 7914.732 - 7965.145: 88.2867% ( 62) 00:09:00.455 7965.145 - 8015.557: 88.5817% ( 54) 00:09:00.455 8015.557 - 8065.969: 88.8877% ( 56) 00:09:00.455 8065.969 - 8116.382: 89.1444% ( 47) 00:09:00.455 8116.382 - 8166.794: 89.4449% ( 55) 00:09:00.455 8166.794 - 8217.206: 89.6962% ( 46) 00:09:00.455 8217.206 - 8267.618: 90.0240% ( 60) 00:09:00.455 8267.618 - 8318.031: 90.2863% ( 48) 00:09:00.455 8318.031 - 
8368.443: 90.5376% ( 46) 00:09:00.455 8368.443 - 8418.855: 90.7561% ( 40) 00:09:00.455 8418.855 - 8469.268: 90.9364% ( 33) 00:09:00.455 8469.268 - 8519.680: 91.1549% ( 40) 00:09:00.455 8519.680 - 8570.092: 91.3625% ( 38) 00:09:00.455 8570.092 - 8620.505: 91.5428% ( 33) 00:09:00.455 8620.505 - 8670.917: 91.7504% ( 38) 00:09:00.455 8670.917 - 8721.329: 91.9580% ( 38) 00:09:00.455 8721.329 - 8771.742: 92.1547% ( 36) 00:09:00.455 8771.742 - 8822.154: 92.3295% ( 32) 00:09:00.455 8822.154 - 8872.566: 92.5317% ( 37) 00:09:00.455 8872.566 - 8922.978: 92.6901% ( 29) 00:09:00.455 8922.978 - 8973.391: 92.8431% ( 28) 00:09:00.455 8973.391 - 9023.803: 93.0070% ( 30) 00:09:00.455 9023.803 - 9074.215: 93.1818% ( 32) 00:09:00.455 9074.215 - 9124.628: 93.3293% ( 27) 00:09:00.455 9124.628 - 9175.040: 93.4659% ( 25) 00:09:00.455 9175.040 - 9225.452: 93.6298% ( 30) 00:09:00.455 9225.452 - 9275.865: 93.7500% ( 22) 00:09:00.455 9275.865 - 9326.277: 93.8702% ( 22) 00:09:00.455 9326.277 - 9376.689: 94.0450% ( 32) 00:09:00.455 9376.689 - 9427.102: 94.1543% ( 20) 00:09:00.455 9427.102 - 9477.514: 94.2745% ( 22) 00:09:00.455 9477.514 - 9527.926: 94.4165% ( 26) 00:09:00.455 9527.926 - 9578.338: 94.5913% ( 32) 00:09:00.455 9578.338 - 9628.751: 94.7225% ( 24) 00:09:00.455 9628.751 - 9679.163: 94.8645% ( 26) 00:09:00.455 9679.163 - 9729.575: 94.9847% ( 22) 00:09:00.455 9729.575 - 9779.988: 95.0776% ( 17) 00:09:00.455 9779.988 - 9830.400: 95.1595% ( 15) 00:09:00.455 9830.400 - 9880.812: 95.2360% ( 14) 00:09:00.455 9880.812 - 9931.225: 95.3234% ( 16) 00:09:00.455 9931.225 - 9981.637: 95.4163% ( 17) 00:09:00.455 9981.637 - 10032.049: 95.4983% ( 15) 00:09:00.455 10032.049 - 10082.462: 95.5638% ( 12) 00:09:00.455 10082.462 - 10132.874: 95.6294% ( 12) 00:09:00.455 10132.874 - 10183.286: 95.7059% ( 14) 00:09:00.455 10183.286 - 10233.698: 95.7878% ( 15) 00:09:00.455 10233.698 - 10284.111: 95.8424% ( 10) 00:09:00.455 10284.111 - 10334.523: 95.9080% ( 12) 00:09:00.455 10334.523 - 10384.935: 95.9626% ( 10) 00:09:00.455 10384.935 - 10435.348: 96.0391% ( 14) 00:09:00.455 10435.348 - 10485.760: 96.0883% ( 9) 00:09:00.455 10485.760 - 10536.172: 96.1429% ( 10) 00:09:00.455 10536.172 - 10586.585: 96.1866% ( 8) 00:09:00.455 10586.585 - 10636.997: 96.2413% ( 10) 00:09:00.455 10636.997 - 10687.409: 96.3014% ( 11) 00:09:00.455 10687.409 - 10737.822: 96.3505% ( 9) 00:09:00.455 10737.822 - 10788.234: 96.3833% ( 6) 00:09:00.455 10788.234 - 10838.646: 96.4379% ( 10) 00:09:00.455 10838.646 - 10889.058: 96.4762% ( 7) 00:09:00.455 10889.058 - 10939.471: 96.5253% ( 9) 00:09:00.455 10939.471 - 10989.883: 96.5854% ( 11) 00:09:00.455 10989.883 - 11040.295: 96.6237% ( 7) 00:09:00.455 11040.295 - 11090.708: 96.6619% ( 7) 00:09:00.455 11090.708 - 11141.120: 96.7002% ( 7) 00:09:00.455 11141.120 - 11191.532: 96.7439% ( 8) 00:09:00.455 11191.532 - 11241.945: 96.7767% ( 6) 00:09:00.455 11241.945 - 11292.357: 96.8422% ( 12) 00:09:00.455 11292.357 - 11342.769: 96.9023% ( 11) 00:09:00.455 11342.769 - 11393.182: 96.9460% ( 8) 00:09:00.455 11393.182 - 11443.594: 97.0061% ( 11) 00:09:00.455 11443.594 - 11494.006: 97.0608% ( 10) 00:09:00.455 11494.006 - 11544.418: 97.1154% ( 10) 00:09:00.455 11544.418 - 11594.831: 97.1755% ( 11) 00:09:00.455 11594.831 - 11645.243: 97.2247% ( 9) 00:09:00.455 11645.243 - 11695.655: 97.2793% ( 10) 00:09:00.455 11695.655 - 11746.068: 97.3175% ( 7) 00:09:00.455 11746.068 - 11796.480: 97.3722% ( 10) 00:09:00.455 11796.480 - 11846.892: 97.4104% ( 7) 00:09:00.455 11846.892 - 11897.305: 97.4432% ( 6) 00:09:00.455 11897.305 - 11947.717: 
97.4705% ( 5) 00:09:00.455 11947.717 - 11998.129: 97.4978% ( 5) 00:09:00.455 11998.129 - 12048.542: 97.5361% ( 7) 00:09:00.455 12048.542 - 12098.954: 97.5688% ( 6) 00:09:00.455 12098.954 - 12149.366: 97.5962% ( 5) 00:09:00.455 12149.366 - 12199.778: 97.6289% ( 6) 00:09:00.455 12199.778 - 12250.191: 97.6617% ( 6) 00:09:00.455 12250.191 - 12300.603: 97.6945% ( 6) 00:09:00.455 12300.603 - 12351.015: 97.7218% ( 5) 00:09:00.455 12351.015 - 12401.428: 97.7601% ( 7) 00:09:00.455 12401.428 - 12451.840: 97.7874% ( 5) 00:09:00.455 12451.840 - 12502.252: 97.8256% ( 7) 00:09:00.455 12502.252 - 12552.665: 97.8584% ( 6) 00:09:00.455 12552.665 - 12603.077: 97.8857% ( 5) 00:09:00.455 12603.077 - 12653.489: 97.9185% ( 6) 00:09:00.456 12653.489 - 12703.902: 97.9513% ( 6) 00:09:00.456 12703.902 - 12754.314: 97.9786% ( 5) 00:09:00.456 12754.314 - 12804.726: 98.0059% ( 5) 00:09:00.456 12804.726 - 12855.138: 98.0278% ( 4) 00:09:00.456 12855.138 - 12905.551: 98.0496% ( 4) 00:09:00.456 12905.551 - 13006.375: 98.0988% ( 9) 00:09:00.456 13006.375 - 13107.200: 98.1534% ( 10) 00:09:00.456 13107.200 - 13208.025: 98.1917% ( 7) 00:09:00.456 13208.025 - 13308.849: 98.2354% ( 8) 00:09:00.456 13308.849 - 13409.674: 98.2845% ( 9) 00:09:00.456 13409.674 - 13510.498: 98.3337% ( 9) 00:09:00.456 13510.498 - 13611.323: 98.3665% ( 6) 00:09:00.456 13611.323 - 13712.148: 98.3829% ( 3) 00:09:00.456 13712.148 - 13812.972: 98.4047% ( 4) 00:09:00.456 13812.972 - 13913.797: 98.4211% ( 3) 00:09:00.456 13913.797 - 14014.622: 98.4375% ( 3) 00:09:00.456 14014.622 - 14115.446: 98.4757% ( 7) 00:09:00.456 14115.446 - 14216.271: 98.5085% ( 6) 00:09:00.456 14216.271 - 14317.095: 98.5468% ( 7) 00:09:00.456 14317.095 - 14417.920: 98.5850% ( 7) 00:09:00.456 14417.920 - 14518.745: 98.6123% ( 5) 00:09:00.456 14518.745 - 14619.569: 98.6451% ( 6) 00:09:00.456 14619.569 - 14720.394: 98.6779% ( 6) 00:09:00.456 14720.394 - 14821.218: 98.7107% ( 6) 00:09:00.456 14821.218 - 14922.043: 98.7489% ( 7) 00:09:00.456 14922.043 - 15022.868: 98.7708% ( 4) 00:09:00.456 15022.868 - 15123.692: 98.7872% ( 3) 00:09:00.456 15123.692 - 15224.517: 98.8035% ( 3) 00:09:00.456 15224.517 - 15325.342: 98.8199% ( 3) 00:09:00.456 15325.342 - 15426.166: 98.8363% ( 3) 00:09:00.456 15426.166 - 15526.991: 98.8527% ( 3) 00:09:00.456 15526.991 - 15627.815: 98.8746% ( 4) 00:09:00.456 15627.815 - 15728.640: 98.8855% ( 2) 00:09:00.456 15728.640 - 15829.465: 98.9019% ( 3) 00:09:00.456 15829.465 - 15930.289: 98.9237% ( 4) 00:09:00.456 15930.289 - 16031.114: 98.9347% ( 2) 00:09:00.456 16031.114 - 16131.938: 98.9510% ( 3) 00:09:00.456 16131.938 - 16232.763: 98.9729% ( 4) 00:09:00.456 16232.763 - 16333.588: 98.9838% ( 2) 00:09:00.456 16333.588 - 16434.412: 99.0057% ( 4) 00:09:00.456 16434.412 - 16535.237: 99.0221% ( 3) 00:09:00.456 16535.237 - 16636.062: 99.0385% ( 3) 00:09:00.456 16636.062 - 16736.886: 99.0549% ( 3) 00:09:00.456 16736.886 - 16837.711: 99.0712% ( 3) 00:09:00.456 16837.711 - 16938.535: 99.0876% ( 3) 00:09:00.456 16938.535 - 17039.360: 99.1040% ( 3) 00:09:00.456 17039.360 - 17140.185: 99.1259% ( 4) 00:09:00.456 17140.185 - 17241.009: 99.1368% ( 2) 00:09:00.456 17241.009 - 17341.834: 99.1587% ( 4) 00:09:00.456 17341.834 - 17442.658: 99.1696% ( 2) 00:09:00.456 17442.658 - 17543.483: 99.1860% ( 3) 00:09:00.456 17543.483 - 17644.308: 99.2078% ( 4) 00:09:00.456 17644.308 - 17745.132: 99.2188% ( 2) 00:09:00.456 17745.132 - 17845.957: 99.2406% ( 4) 00:09:00.456 17845.957 - 17946.782: 99.2515% ( 2) 00:09:00.456 17946.782 - 18047.606: 99.2734% ( 4) 00:09:00.456 18047.606 - 18148.431: 
99.2898% ( 3) 00:09:00.456 18148.431 - 18249.255: 99.3007% ( 2) 00:09:00.456 28230.892 - 28432.542: 99.3280% ( 5) 00:09:00.456 28432.542 - 28634.191: 99.3663% ( 7) 00:09:00.456 28634.191 - 28835.840: 99.3990% ( 6) 00:09:00.456 28835.840 - 29037.489: 99.4427% ( 8) 00:09:00.456 29037.489 - 29239.138: 99.4810% ( 7) 00:09:00.456 29239.138 - 29440.788: 99.5247% ( 8) 00:09:00.456 29440.788 - 29642.437: 99.5629% ( 7) 00:09:00.456 29642.437 - 29844.086: 99.6012% ( 7) 00:09:00.456 29844.086 - 30045.735: 99.6449% ( 8) 00:09:00.456 30045.735 - 30247.385: 99.6831% ( 7) 00:09:00.456 30247.385 - 30449.034: 99.7214% ( 7) 00:09:00.456 30449.034 - 30650.683: 99.7596% ( 7) 00:09:00.456 30650.683 - 30852.332: 99.7979% ( 7) 00:09:00.456 30852.332 - 31053.982: 99.8416% ( 8) 00:09:00.456 31053.982 - 31255.631: 99.8853% ( 8) 00:09:00.456 31255.631 - 31457.280: 99.9290% ( 8) 00:09:00.456 31457.280 - 31658.929: 99.9672% ( 7) 00:09:00.456 31658.929 - 31860.578: 100.0000% ( 6) 00:09:00.456 00:09:00.456 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:00.456 ============================================================================== 00:09:00.456 Range in us Cumulative IO count 00:09:00.456 5066.437 - 5091.643: 0.0055% ( 1) 00:09:00.456 5091.643 - 5116.849: 0.0765% ( 13) 00:09:00.456 5116.849 - 5142.055: 0.1475% ( 13) 00:09:00.456 5142.055 - 5167.262: 0.1694% ( 4) 00:09:00.456 5167.262 - 5192.468: 0.1912% ( 4) 00:09:00.456 5192.468 - 5217.674: 0.2076% ( 3) 00:09:00.456 5217.674 - 5242.880: 0.2349% ( 5) 00:09:00.456 5242.880 - 5268.086: 0.2786% ( 8) 00:09:00.456 5268.086 - 5293.292: 0.3333% ( 10) 00:09:00.456 5293.292 - 5318.498: 0.3988% ( 12) 00:09:00.456 5318.498 - 5343.705: 0.5081% ( 20) 00:09:00.456 5343.705 - 5368.911: 0.7157% ( 38) 00:09:00.456 5368.911 - 5394.117: 0.9615% ( 45) 00:09:00.456 5394.117 - 5419.323: 1.2019% ( 44) 00:09:00.456 5419.323 - 5444.529: 1.5024% ( 55) 00:09:00.456 5444.529 - 5469.735: 1.8848% ( 70) 00:09:00.456 5469.735 - 5494.942: 2.3547% ( 86) 00:09:00.456 5494.942 - 5520.148: 2.8191% ( 85) 00:09:00.456 5520.148 - 5545.354: 3.5238% ( 129) 00:09:00.456 5545.354 - 5570.560: 4.2177% ( 127) 00:09:00.456 5570.560 - 5595.766: 5.0372% ( 150) 00:09:00.456 5595.766 - 5620.972: 5.8949% ( 157) 00:09:00.456 5620.972 - 5646.178: 6.7799% ( 162) 00:09:00.456 5646.178 - 5671.385: 7.7360% ( 175) 00:09:00.456 5671.385 - 5696.591: 8.8014% ( 195) 00:09:00.456 5696.591 - 5721.797: 10.0361% ( 226) 00:09:00.456 5721.797 - 5747.003: 11.1560% ( 205) 00:09:00.456 5747.003 - 5772.209: 12.2214% ( 195) 00:09:00.456 5772.209 - 5797.415: 13.3468% ( 206) 00:09:00.456 5797.415 - 5822.622: 14.4941% ( 210) 00:09:00.456 5822.622 - 5847.828: 15.6414% ( 210) 00:09:00.456 5847.828 - 5873.034: 16.7887% ( 210) 00:09:00.456 5873.034 - 5898.240: 17.9524% ( 213) 00:09:00.456 5898.240 - 5923.446: 19.1434% ( 218) 00:09:00.456 5923.446 - 5948.652: 20.2633% ( 205) 00:09:00.456 5948.652 - 5973.858: 21.3615% ( 201) 00:09:00.456 5973.858 - 5999.065: 22.5524% ( 218) 00:09:00.456 5999.065 - 6024.271: 23.7107% ( 212) 00:09:00.456 6024.271 - 6049.477: 24.8634% ( 211) 00:09:00.456 6049.477 - 6074.683: 26.0544% ( 218) 00:09:00.456 6074.683 - 6099.889: 27.2563% ( 220) 00:09:00.456 6099.889 - 6125.095: 28.5020% ( 228) 00:09:00.456 6125.095 - 6150.302: 29.6711% ( 214) 00:09:00.456 6150.302 - 6175.508: 30.8785% ( 221) 00:09:00.456 6175.508 - 6200.714: 32.1023% ( 224) 00:09:00.456 6200.714 - 6225.920: 33.3534% ( 229) 00:09:00.456 6225.920 - 6251.126: 34.5826% ( 225) 00:09:00.456 6251.126 - 6276.332: 35.7736% ( 218) 00:09:00.456 
6276.332 - 6301.538: 37.0192% ( 228) 00:09:00.456 6301.538 - 6326.745: 38.2703% ( 229) 00:09:00.456 6326.745 - 6351.951: 39.4722% ( 220) 00:09:00.456 6351.951 - 6377.157: 40.6632% ( 218) 00:09:00.456 6377.157 - 6402.363: 41.8979% ( 226) 00:09:00.456 6402.363 - 6427.569: 43.1709% ( 233) 00:09:00.456 6427.569 - 6452.775: 44.4165% ( 228) 00:09:00.456 6452.775 - 6503.188: 46.8040% ( 437) 00:09:00.456 6503.188 - 6553.600: 49.3171% ( 460) 00:09:00.456 6553.600 - 6604.012: 51.7810% ( 451) 00:09:00.456 6604.012 - 6654.425: 54.2778% ( 457) 00:09:00.456 6654.425 - 6704.837: 56.8455% ( 470) 00:09:00.456 6704.837 - 6755.249: 59.2603% ( 442) 00:09:00.456 6755.249 - 6805.662: 61.6477% ( 437) 00:09:00.456 6805.662 - 6856.074: 64.0406% ( 438) 00:09:00.456 6856.074 - 6906.486: 66.4609% ( 443) 00:09:00.456 6906.486 - 6956.898: 68.8210% ( 432) 00:09:00.456 6956.898 - 7007.311: 71.0391% ( 406) 00:09:00.456 7007.311 - 7057.723: 73.1316% ( 383) 00:09:00.456 7057.723 - 7108.135: 75.1366% ( 367) 00:09:00.456 7108.135 - 7158.548: 77.0160% ( 344) 00:09:00.456 7158.548 - 7208.960: 78.7806% ( 323) 00:09:00.456 7208.960 - 7259.372: 80.4469% ( 305) 00:09:00.456 7259.372 - 7309.785: 81.9712% ( 279) 00:09:00.456 7309.785 - 7360.197: 83.2605% ( 236) 00:09:00.456 7360.197 - 7410.609: 84.3094% ( 192) 00:09:00.456 7410.609 - 7461.022: 85.1180% ( 148) 00:09:00.456 7461.022 - 7511.434: 85.7791% ( 121) 00:09:00.456 7511.434 - 7561.846: 86.3145% ( 98) 00:09:00.456 7561.846 - 7612.258: 86.6860% ( 68) 00:09:00.456 7612.258 - 7662.671: 86.9974% ( 57) 00:09:00.456 7662.671 - 7713.083: 87.2869% ( 53) 00:09:00.456 7713.083 - 7763.495: 87.5710% ( 52) 00:09:00.456 7763.495 - 7813.908: 87.8606% ( 53) 00:09:00.456 7813.908 - 7864.320: 88.1665% ( 56) 00:09:00.456 7864.320 - 7914.732: 88.4397% ( 50) 00:09:00.456 7914.732 - 7965.145: 88.7456% ( 56) 00:09:00.456 7965.145 - 8015.557: 89.0133% ( 49) 00:09:00.456 8015.557 - 8065.969: 89.2920% ( 51) 00:09:00.456 8065.969 - 8116.382: 89.6143% ( 59) 00:09:00.456 8116.382 - 8166.794: 89.8711% ( 47) 00:09:00.456 8166.794 - 8217.206: 90.1388% ( 49) 00:09:00.456 8217.206 - 8267.618: 90.3792% ( 44) 00:09:00.456 8267.618 - 8318.031: 90.5813% ( 37) 00:09:00.456 8318.031 - 8368.443: 90.7834% ( 37) 00:09:00.456 8368.443 - 8418.855: 90.9965% ( 39) 00:09:00.456 8418.855 - 8469.268: 91.2041% ( 38) 00:09:00.456 8469.268 - 8519.680: 91.4117% ( 38) 00:09:00.456 8519.680 - 8570.092: 91.6193% ( 38) 00:09:00.456 8570.092 - 8620.505: 91.8215% ( 37) 00:09:00.456 8620.505 - 8670.917: 92.0072% ( 34) 00:09:00.456 8670.917 - 8721.329: 92.1875% ( 33) 00:09:00.456 8721.329 - 8771.742: 92.3569% ( 31) 00:09:00.456 8771.742 - 8822.154: 92.5098% ( 28) 00:09:00.456 8822.154 - 8872.566: 92.6683% ( 29) 00:09:00.456 8872.566 - 8922.978: 92.7994% ( 24) 00:09:00.456 8922.978 - 8973.391: 92.9305% ( 24) 00:09:00.457 8973.391 - 9023.803: 93.0671% ( 25) 00:09:00.457 9023.803 - 9074.215: 93.2365% ( 31) 00:09:00.457 9074.215 - 9124.628: 93.4386% ( 37) 00:09:00.457 9124.628 - 9175.040: 93.6134% ( 32) 00:09:00.457 9175.040 - 9225.452: 93.7992% ( 34) 00:09:00.457 9225.452 - 9275.865: 93.9740% ( 32) 00:09:00.457 9275.865 - 9326.277: 94.1597% ( 34) 00:09:00.457 9326.277 - 9376.689: 94.3510% ( 35) 00:09:00.457 9376.689 - 9427.102: 94.5422% ( 35) 00:09:00.457 9427.102 - 9477.514: 94.6897% ( 27) 00:09:00.457 9477.514 - 9527.926: 94.8481% ( 29) 00:09:00.457 9527.926 - 9578.338: 94.9847% ( 25) 00:09:00.457 9578.338 - 9628.751: 95.1049% ( 22) 00:09:00.457 9628.751 - 9679.163: 95.2251% ( 22) 00:09:00.457 9679.163 - 9729.575: 95.3289% ( 19) 
00:09:00.457 9729.575 - 9779.988: 95.4491% ( 22) 00:09:00.457 9779.988 - 9830.400: 95.5474% ( 18) 00:09:00.457 9830.400 - 9880.812: 95.6458% ( 18) 00:09:00.457 9880.812 - 9931.225: 95.7386% ( 17) 00:09:00.457 9931.225 - 9981.637: 95.8042% ( 12) 00:09:00.457 9981.637 - 10032.049: 95.8752% ( 13) 00:09:00.457 10032.049 - 10082.462: 95.9626% ( 16) 00:09:00.457 10082.462 - 10132.874: 96.0337% ( 13) 00:09:00.457 10132.874 - 10183.286: 96.1047% ( 13) 00:09:00.457 10183.286 - 10233.698: 96.1812% ( 14) 00:09:00.457 10233.698 - 10284.111: 96.2631% ( 15) 00:09:00.457 10284.111 - 10334.523: 96.3341% ( 13) 00:09:00.457 10334.523 - 10384.935: 96.4215% ( 16) 00:09:00.457 10384.935 - 10435.348: 96.4980% ( 14) 00:09:00.457 10435.348 - 10485.760: 96.5745% ( 14) 00:09:00.457 10485.760 - 10536.172: 96.6455% ( 13) 00:09:00.457 10536.172 - 10586.585: 96.7056% ( 11) 00:09:00.457 10586.585 - 10636.997: 96.7712% ( 12) 00:09:00.457 10636.997 - 10687.409: 96.8422% ( 13) 00:09:00.457 10687.409 - 10737.822: 96.9132% ( 13) 00:09:00.457 10737.822 - 10788.234: 96.9624% ( 9) 00:09:00.457 10788.234 - 10838.646: 97.0116% ( 9) 00:09:00.457 10838.646 - 10889.058: 97.0498% ( 7) 00:09:00.457 10889.058 - 10939.471: 97.0771% ( 5) 00:09:00.457 10939.471 - 10989.883: 97.1099% ( 6) 00:09:00.457 10989.883 - 11040.295: 97.1318% ( 4) 00:09:00.457 11040.295 - 11090.708: 97.1427% ( 2) 00:09:00.457 11090.708 - 11141.120: 97.1536% ( 2) 00:09:00.457 11141.120 - 11191.532: 97.1646% ( 2) 00:09:00.457 11191.532 - 11241.945: 97.1755% ( 2) 00:09:00.457 11241.945 - 11292.357: 97.1809% ( 1) 00:09:00.457 11292.357 - 11342.769: 97.1919% ( 2) 00:09:00.457 11342.769 - 11393.182: 97.2028% ( 2) 00:09:00.457 11645.243 - 11695.655: 97.2083% ( 1) 00:09:00.457 11695.655 - 11746.068: 97.2301% ( 4) 00:09:00.457 11746.068 - 11796.480: 97.2465% ( 3) 00:09:00.457 11796.480 - 11846.892: 97.2629% ( 3) 00:09:00.457 11846.892 - 11897.305: 97.2793% ( 3) 00:09:00.457 11897.305 - 11947.717: 97.3011% ( 4) 00:09:00.457 11947.717 - 11998.129: 97.3175% ( 3) 00:09:00.457 11998.129 - 12048.542: 97.3339% ( 3) 00:09:00.457 12048.542 - 12098.954: 97.3503% ( 3) 00:09:00.457 12098.954 - 12149.366: 97.3722% ( 4) 00:09:00.457 12149.366 - 12199.778: 97.3885% ( 3) 00:09:00.457 12199.778 - 12250.191: 97.4049% ( 3) 00:09:00.457 12250.191 - 12300.603: 97.4213% ( 3) 00:09:00.457 12300.603 - 12351.015: 97.4432% ( 4) 00:09:00.457 12351.015 - 12401.428: 97.4596% ( 3) 00:09:00.457 12401.428 - 12451.840: 97.4760% ( 3) 00:09:00.457 12451.840 - 12502.252: 97.4924% ( 3) 00:09:00.457 12502.252 - 12552.665: 97.5142% ( 4) 00:09:00.457 12552.665 - 12603.077: 97.5306% ( 3) 00:09:00.457 12603.077 - 12653.489: 97.5470% ( 3) 00:09:00.457 12653.489 - 12703.902: 97.5634% ( 3) 00:09:00.457 12703.902 - 12754.314: 97.5798% ( 3) 00:09:00.457 12754.314 - 12804.726: 97.5962% ( 3) 00:09:00.457 12804.726 - 12855.138: 97.6180% ( 4) 00:09:00.457 12855.138 - 12905.551: 97.6344% ( 3) 00:09:00.457 12905.551 - 13006.375: 97.6726% ( 7) 00:09:00.457 13006.375 - 13107.200: 97.7054% ( 6) 00:09:00.457 13107.200 - 13208.025: 97.7437% ( 7) 00:09:00.457 13208.025 - 13308.849: 97.7764% ( 6) 00:09:00.457 13308.849 - 13409.674: 97.8365% ( 11) 00:09:00.457 13409.674 - 13510.498: 97.8912% ( 10) 00:09:00.457 13510.498 - 13611.323: 97.9513% ( 11) 00:09:00.457 13611.323 - 13712.148: 97.9895% ( 7) 00:09:00.457 13712.148 - 13812.972: 98.0114% ( 4) 00:09:00.457 13812.972 - 13913.797: 98.0332% ( 4) 00:09:00.457 13913.797 - 14014.622: 98.0551% ( 4) 00:09:00.457 14014.622 - 14115.446: 98.0715% ( 3) 00:09:00.457 14115.446 - 14216.271: 
98.0933% ( 4) 00:09:00.457 14216.271 - 14317.095: 98.1097% ( 3) 00:09:00.457 14317.095 - 14417.920: 98.1316% ( 4) 00:09:00.457 14417.920 - 14518.745: 98.1534% ( 4) 00:09:00.457 14518.745 - 14619.569: 98.1753% ( 4) 00:09:00.457 14619.569 - 14720.394: 98.1971% ( 4) 00:09:00.457 14720.394 - 14821.218: 98.2135% ( 3) 00:09:00.457 14821.218 - 14922.043: 98.2354% ( 4) 00:09:00.457 14922.043 - 15022.868: 98.2572% ( 4) 00:09:00.457 15022.868 - 15123.692: 98.2791% ( 4) 00:09:00.457 15123.692 - 15224.517: 98.3009% ( 4) 00:09:00.457 15224.517 - 15325.342: 98.3228% ( 4) 00:09:00.457 15325.342 - 15426.166: 98.3392% ( 3) 00:09:00.457 15426.166 - 15526.991: 98.3610% ( 4) 00:09:00.457 15526.991 - 15627.815: 98.3993% ( 7) 00:09:00.457 15627.815 - 15728.640: 98.4320% ( 6) 00:09:00.457 15728.640 - 15829.465: 98.4703% ( 7) 00:09:00.457 15829.465 - 15930.289: 98.5085% ( 7) 00:09:00.457 15930.289 - 16031.114: 98.5468% ( 7) 00:09:00.457 16031.114 - 16131.938: 98.5850% ( 7) 00:09:00.457 16131.938 - 16232.763: 98.6233% ( 7) 00:09:00.457 16232.763 - 16333.588: 98.6615% ( 7) 00:09:00.457 16333.588 - 16434.412: 98.7052% ( 8) 00:09:00.457 16434.412 - 16535.237: 98.7708% ( 12) 00:09:00.457 16535.237 - 16636.062: 98.8090% ( 7) 00:09:00.457 16636.062 - 16736.886: 98.8309% ( 4) 00:09:00.457 16736.886 - 16837.711: 98.8472% ( 3) 00:09:00.457 16837.711 - 16938.535: 98.8691% ( 4) 00:09:00.457 16938.535 - 17039.360: 98.8855% ( 3) 00:09:00.457 17039.360 - 17140.185: 98.9073% ( 4) 00:09:00.457 17140.185 - 17241.009: 98.9292% ( 4) 00:09:00.457 17241.009 - 17341.834: 98.9456% ( 3) 00:09:00.457 17341.834 - 17442.658: 98.9620% ( 3) 00:09:00.457 17442.658 - 17543.483: 98.9838% ( 4) 00:09:00.457 17543.483 - 17644.308: 99.0057% ( 4) 00:09:00.457 17644.308 - 17745.132: 99.0111% ( 1) 00:09:00.457 17745.132 - 17845.957: 99.0330% ( 4) 00:09:00.457 17845.957 - 17946.782: 99.0549% ( 4) 00:09:00.457 17946.782 - 18047.606: 99.0712% ( 3) 00:09:00.457 18047.606 - 18148.431: 99.0931% ( 4) 00:09:00.457 18148.431 - 18249.255: 99.1149% ( 4) 00:09:00.457 18249.255 - 18350.080: 99.1313% ( 3) 00:09:00.457 18350.080 - 18450.905: 99.1532% ( 4) 00:09:00.457 18450.905 - 18551.729: 99.1914% ( 7) 00:09:00.457 18551.729 - 18652.554: 99.2351% ( 8) 00:09:00.457 18652.554 - 18753.378: 99.2734% ( 7) 00:09:00.457 18753.378 - 18854.203: 99.3007% ( 5) 00:09:00.457 26819.348 - 27020.997: 99.3062% ( 1) 00:09:00.457 27020.997 - 27222.646: 99.3444% ( 7) 00:09:00.457 27222.646 - 27424.295: 99.3881% ( 8) 00:09:00.457 27424.295 - 27625.945: 99.4318% ( 8) 00:09:00.457 27625.945 - 27827.594: 99.4755% ( 8) 00:09:00.457 27827.594 - 28029.243: 99.5138% ( 7) 00:09:00.457 28029.243 - 28230.892: 99.5575% ( 8) 00:09:00.457 28230.892 - 28432.542: 99.5957% ( 7) 00:09:00.457 28432.542 - 28634.191: 99.6340% ( 7) 00:09:00.457 28634.191 - 28835.840: 99.6777% ( 8) 00:09:00.457 28835.840 - 29037.489: 99.7159% ( 7) 00:09:00.457 29037.489 - 29239.138: 99.7596% ( 8) 00:09:00.457 29239.138 - 29440.788: 99.8033% ( 8) 00:09:00.457 29440.788 - 29642.437: 99.8470% ( 8) 00:09:00.457 29642.437 - 29844.086: 99.8853% ( 7) 00:09:00.457 29844.086 - 30045.735: 99.9290% ( 8) 00:09:00.457 30045.735 - 30247.385: 99.9727% ( 8) 00:09:00.457 30247.385 - 30449.034: 100.0000% ( 5) 00:09:00.457 00:09:00.457 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:00.457 ============================================================================== 00:09:00.457 Range in us Cumulative IO count 00:09:00.457 5217.674 - 5242.880: 0.0109% ( 2) 00:09:00.457 5242.880 - 5268.086: 0.0492% ( 7) 00:09:00.457 
5268.086 - 5293.292: 0.0765% ( 5) 00:09:00.457 5293.292 - 5318.498: 0.1858% ( 20) 00:09:00.457 5318.498 - 5343.705: 0.2841% ( 18) 00:09:00.457 5343.705 - 5368.911: 0.4207% ( 25) 00:09:00.457 5368.911 - 5394.117: 0.5955% ( 32) 00:09:00.457 5394.117 - 5419.323: 0.8359% ( 44) 00:09:00.457 5419.323 - 5444.529: 1.0981% ( 48) 00:09:00.457 5444.529 - 5469.735: 1.4532% ( 65) 00:09:00.457 5469.735 - 5494.942: 1.8684% ( 76) 00:09:00.457 5494.942 - 5520.148: 2.3110% ( 81) 00:09:00.457 5520.148 - 5545.354: 2.8191% ( 93) 00:09:00.457 5545.354 - 5570.560: 3.6222% ( 147) 00:09:00.457 5570.560 - 5595.766: 4.4362% ( 149) 00:09:00.457 5595.766 - 5620.972: 5.2775% ( 154) 00:09:00.457 5620.972 - 5646.178: 6.2063% ( 170) 00:09:00.457 5646.178 - 5671.385: 7.2389% ( 189) 00:09:00.457 5671.385 - 5696.591: 8.4517% ( 222) 00:09:00.457 5696.591 - 5721.797: 9.6099% ( 212) 00:09:00.457 5721.797 - 5747.003: 10.8118% ( 220) 00:09:00.457 5747.003 - 5772.209: 11.9045% ( 200) 00:09:00.457 5772.209 - 5797.415: 13.0190% ( 204) 00:09:00.457 5797.415 - 5822.622: 14.2045% ( 217) 00:09:00.457 5822.622 - 5847.828: 15.4065% ( 220) 00:09:00.457 5847.828 - 5873.034: 16.5374% ( 207) 00:09:00.457 5873.034 - 5898.240: 17.7611% ( 224) 00:09:00.457 5898.240 - 5923.446: 19.0013% ( 227) 00:09:00.457 5923.446 - 5948.652: 20.2142% ( 222) 00:09:00.457 5948.652 - 5973.858: 21.3778% ( 213) 00:09:00.457 5973.858 - 5999.065: 22.5743% ( 219) 00:09:00.457 5999.065 - 6024.271: 23.7817% ( 221) 00:09:00.458 6024.271 - 6049.477: 24.9017% ( 205) 00:09:00.458 6049.477 - 6074.683: 26.0599% ( 212) 00:09:00.458 6074.683 - 6099.889: 27.2509% ( 218) 00:09:00.458 6099.889 - 6125.095: 28.4965% ( 228) 00:09:00.458 6125.095 - 6150.302: 29.7585% ( 231) 00:09:00.458 6150.302 - 6175.508: 31.0315% ( 233) 00:09:00.458 6175.508 - 6200.714: 32.2880% ( 230) 00:09:00.458 6200.714 - 6225.920: 33.5118% ( 224) 00:09:00.458 6225.920 - 6251.126: 34.7192% ( 221) 00:09:00.458 6251.126 - 6276.332: 35.9703% ( 229) 00:09:00.458 6276.332 - 6301.538: 37.1941% ( 224) 00:09:00.458 6301.538 - 6326.745: 38.4124% ( 223) 00:09:00.458 6326.745 - 6351.951: 39.6471% ( 226) 00:09:00.458 6351.951 - 6377.157: 40.9091% ( 231) 00:09:00.458 6377.157 - 6402.363: 42.1602% ( 229) 00:09:00.458 6402.363 - 6427.569: 43.4058% ( 228) 00:09:00.458 6427.569 - 6452.775: 44.6951% ( 236) 00:09:00.458 6452.775 - 6503.188: 47.3066% ( 478) 00:09:00.458 6503.188 - 6553.600: 49.8306% ( 462) 00:09:00.458 6553.600 - 6604.012: 52.4312% ( 476) 00:09:00.458 6604.012 - 6654.425: 55.0481% ( 479) 00:09:00.458 6654.425 - 6704.837: 57.6213% ( 471) 00:09:00.458 6704.837 - 6755.249: 60.1945% ( 471) 00:09:00.458 6755.249 - 6805.662: 62.7458% ( 467) 00:09:00.458 6805.662 - 6856.074: 65.2371% ( 456) 00:09:00.458 6856.074 - 6906.486: 67.7229% ( 455) 00:09:00.458 6906.486 - 6956.898: 70.1213% ( 439) 00:09:00.458 6956.898 - 7007.311: 72.4049% ( 418) 00:09:00.458 7007.311 - 7057.723: 74.5684% ( 396) 00:09:00.458 7057.723 - 7108.135: 76.5516% ( 363) 00:09:00.458 7108.135 - 7158.548: 78.4583% ( 349) 00:09:00.458 7158.548 - 7208.960: 80.2885% ( 335) 00:09:00.458 7208.960 - 7259.372: 81.9493% ( 304) 00:09:00.458 7259.372 - 7309.785: 83.3643% ( 259) 00:09:00.458 7309.785 - 7360.197: 84.5553% ( 218) 00:09:00.458 7360.197 - 7410.609: 85.6042% ( 192) 00:09:00.458 7410.609 - 7461.022: 86.3964% ( 145) 00:09:00.458 7461.022 - 7511.434: 87.0247% ( 115) 00:09:00.458 7511.434 - 7561.846: 87.6038% ( 106) 00:09:00.458 7561.846 - 7612.258: 88.0354% ( 79) 00:09:00.458 7612.258 - 7662.671: 88.4233% ( 71) 00:09:00.458 7662.671 - 7713.083: 88.7402% ( 58) 
00:09:00.458 7713.083 - 7763.495: 89.0516% ( 57) 00:09:00.458 7763.495 - 7813.908: 89.3630% ( 57) 00:09:00.458 7813.908 - 7864.320: 89.6635% ( 55) 00:09:00.458 7864.320 - 7914.732: 89.9202% ( 47) 00:09:00.458 7914.732 - 7965.145: 90.1060% ( 34) 00:09:00.458 7965.145 - 8015.557: 90.2972% ( 35) 00:09:00.458 8015.557 - 8065.969: 90.4775% ( 33) 00:09:00.458 8065.969 - 8116.382: 90.6742% ( 36) 00:09:00.458 8116.382 - 8166.794: 90.8490% ( 32) 00:09:00.458 8166.794 - 8217.206: 91.0293% ( 33) 00:09:00.458 8217.206 - 8267.618: 91.2096% ( 33) 00:09:00.458 8267.618 - 8318.031: 91.3680% ( 29) 00:09:00.458 8318.031 - 8368.443: 91.5046% ( 25) 00:09:00.458 8368.443 - 8418.855: 91.6248% ( 22) 00:09:00.458 8418.855 - 8469.268: 91.7122% ( 16) 00:09:00.458 8469.268 - 8519.680: 91.8160% ( 19) 00:09:00.458 8519.680 - 8570.092: 91.8979% ( 15) 00:09:00.458 8570.092 - 8620.505: 91.9908% ( 17) 00:09:00.458 8620.505 - 8670.917: 92.0673% ( 14) 00:09:00.458 8670.917 - 8721.329: 92.1656% ( 18) 00:09:00.458 8721.329 - 8771.742: 92.2531% ( 16) 00:09:00.458 8771.742 - 8822.154: 92.3405% ( 16) 00:09:00.458 8822.154 - 8872.566: 92.4224% ( 15) 00:09:00.458 8872.566 - 8922.978: 92.5044% ( 15) 00:09:00.458 8922.978 - 8973.391: 92.6027% ( 18) 00:09:00.458 8973.391 - 9023.803: 92.6792% ( 14) 00:09:00.458 9023.803 - 9074.215: 92.7557% ( 14) 00:09:00.458 9074.215 - 9124.628: 92.8158% ( 11) 00:09:00.458 9124.628 - 9175.040: 92.8868% ( 13) 00:09:00.458 9175.040 - 9225.452: 92.9688% ( 15) 00:09:00.458 9225.452 - 9275.865: 93.0726% ( 19) 00:09:00.458 9275.865 - 9326.277: 93.1709% ( 18) 00:09:00.458 9326.277 - 9376.689: 93.2747% ( 19) 00:09:00.458 9376.689 - 9427.102: 93.3730% ( 18) 00:09:00.458 9427.102 - 9477.514: 93.4714% ( 18) 00:09:00.458 9477.514 - 9527.926: 93.5752% ( 19) 00:09:00.458 9527.926 - 9578.338: 93.6790% ( 19) 00:09:00.458 9578.338 - 9628.751: 93.7828% ( 19) 00:09:00.458 9628.751 - 9679.163: 93.8811% ( 18) 00:09:00.458 9679.163 - 9729.575: 93.9795% ( 18) 00:09:00.458 9729.575 - 9779.988: 94.0833% ( 19) 00:09:00.458 9779.988 - 9830.400: 94.1925% ( 20) 00:09:00.458 9830.400 - 9880.812: 94.2909% ( 18) 00:09:00.458 9880.812 - 9931.225: 94.3892% ( 18) 00:09:00.458 9931.225 - 9981.637: 94.5039% ( 21) 00:09:00.458 9981.637 - 10032.049: 94.5913% ( 16) 00:09:00.458 10032.049 - 10082.462: 94.6733% ( 15) 00:09:00.458 10082.462 - 10132.874: 94.7662% ( 17) 00:09:00.458 10132.874 - 10183.286: 94.8481% ( 15) 00:09:00.458 10183.286 - 10233.698: 94.9410% ( 17) 00:09:00.458 10233.698 - 10284.111: 95.0066% ( 12) 00:09:00.458 10284.111 - 10334.523: 95.0721% ( 12) 00:09:00.458 10334.523 - 10384.935: 95.1431% ( 13) 00:09:00.458 10384.935 - 10435.348: 95.2087% ( 12) 00:09:00.458 10435.348 - 10485.760: 95.2852% ( 14) 00:09:00.458 10485.760 - 10536.172: 95.3507% ( 12) 00:09:00.458 10536.172 - 10586.585: 95.4218% ( 13) 00:09:00.458 10586.585 - 10636.997: 95.5037% ( 15) 00:09:00.458 10636.997 - 10687.409: 95.5693% ( 12) 00:09:00.458 10687.409 - 10737.822: 95.6512% ( 15) 00:09:00.458 10737.822 - 10788.234: 95.7332% ( 15) 00:09:00.458 10788.234 - 10838.646: 95.8151% ( 15) 00:09:00.458 10838.646 - 10889.058: 95.8971% ( 15) 00:09:00.458 10889.058 - 10939.471: 95.9681% ( 13) 00:09:00.458 10939.471 - 10989.883: 96.0009% ( 6) 00:09:00.458 10989.883 - 11040.295: 96.0337% ( 6) 00:09:00.458 11040.295 - 11090.708: 96.0828% ( 9) 00:09:00.458 11090.708 - 11141.120: 96.1320% ( 9) 00:09:00.458 11141.120 - 11191.532: 96.1757% ( 8) 00:09:00.458 11191.532 - 11241.945: 96.2249% ( 9) 00:09:00.458 11241.945 - 11292.357: 96.2740% ( 9) 00:09:00.458 11292.357 - 
00:09:00.458 [ latency histogram bucket lines continue; cumulative IO reaches 100.0000% at 29844.086us ]
00:09:00.459 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:00.459 ==============================================================================
00:09:00.459        Range in us     Cumulative    IO count
00:09:00.459 [ bucket lines from 5242.880us to 28432.542us elided; cumulative IO reaches 100.0000% at 28432.542us ]
00:09:00.460 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:00.460 ==============================================================================
00:09:00.460        Range in us     Cumulative    IO count
00:09:00.461 [ bucket lines from 5217.674us to 26819.348us elided; cumulative IO reaches 100.0000% at 26819.348us ]
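Editor's note: the elided histogram rows above all share the shape "low - high: cumulative% ( count)", with latencies in microseconds. The following is a minimal sketch for pulling those rows back out of the raw console text; it is an editor's addition, not SPDK code, and the sample row is copied from the NSID 1 histogram above.

    import re

    # Matches "low - high: cum% ( count)" as printed by the perf tool's histogram.
    BUCKET = re.compile(r"(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)")

    def parse_buckets(text):
        """Return (low_us, high_us, cumulative_pct, io_count) tuples in log order."""
        return [(float(lo), float(hi), float(cum), int(n))
                for lo, hi, cum, n in BUCKET.findall(text)]

    print(parse_buckets("14821.218 - 14922.043: 98.8691% (    11)"))
    # -> [(14821.218, 14922.043, 98.8691, 11)]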
00:09:00.461 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:00.461 ==============================================================================
00:09:00.461        Range in us     Cumulative    IO count
00:09:00.462 [ bucket lines from 5242.880us to 18450.905us elided; cumulative IO reaches 100.0000% at 18450.905us ]
00:09:00.463 10:36:30 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:01.838 Initializing NVMe Controllers
00:09:01.838 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:09:01.838 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:09:01.838 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:09:01.838 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:09:01.838 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:09:01.838 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:09:01.838 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:09:01.838 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:09:01.838 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:09:01.838 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:09:01.838 Initialization complete. Launching workers.
00:09:01.838 ========================================================
00:09:01.838                                                                             Latency(us)
00:09:01.838 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:01.838 PCIE (0000:00:06.0) NSID 1 from core 0:   17343.86     203.25    7376.18    5017.70   26869.70
00:09:01.838 PCIE (0000:00:07.0) NSID 1 from core 0:   17353.76     203.36    7365.38    5602.94   25088.55
00:09:01.838 PCIE (0000:00:09.0) NSID 1 from core 0:   17470.46     204.73    7309.18    5315.04   16093.55
00:09:01.838 PCIE (0000:00:08.0) NSID 1 from core 0:   17470.46     204.73    7302.20    5408.34   16096.01
00:09:01.838 PCIE (0000:00:08.0) NSID 2 from core 0:   17470.46     204.73    7295.44    5350.84   16213.26
00:09:01.838 PCIE (0000:00:08.0) NSID 3 from core 0:   17470.46     204.73    7288.52    5311.60   15885.89
00:09:01.838 ========================================================
00:09:01.838 Total                                  :  104579.47    1225.54    7322.70    5017.70   26869.70
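Editor's note: the Device Information table is internally consistent with the command line above. Taking the usual spdk_nvme_perf flag meanings (-q 128 queue depth, -w write workload, -o 12288 IO size in bytes, -t 1 second), each MiB/s value is just IOPS times the IO size. A quick check (editor's sketch, not part of the test):

    IO_SIZE = 12288  # bytes per IO, from the "-o 12288" flag above

    for dev, iops, reported in [("0000:00:06.0 NSID 1", 17343.86, 203.25),
                                ("0000:00:08.0 NSID 3", 17470.46, 204.73)]:
        computed = iops * IO_SIZE / 2**20  # bytes/s -> MiB/s
        print(f"{dev}: reported {reported} MiB/s, computed {computed:.2f} MiB/s")
    # 17343.86 * 12288 / 2**20 = 203.25, matching the table.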
00:09:01.838 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:01.838   1%: 5494.942us | 10%: 6024.271us | 25%: 6377.157us | 50%: 6755.249us | 75%: 7309.785us | 90%: 9477.514us | 95%: 12048.542us | 98%: 13913.797us | 99%: 14922.043us | 99.5%: 25407.803us | 99.9%: 26214.400us | 99.99%: 26819.348us | 99.999%-99.99999%: 27020.997us
00:09:01.838 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:01.838   1%: 5948.652us | 10%: 6276.332us | 25%: 6452.775us | 50%: 6704.837us | 75%: 7158.548us | 90%: 9729.575us | 95%: 12048.542us | 98%: 13712.148us | 99%: 14619.569us | 99.5%: 23996.258us | 99.9%: 24802.855us | 99.99%-99.99999%: 25105.329us
00:09:01.838 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:01.838   1%: 5847.828us | 10%: 6175.508us | 25%: 6402.363us | 50%: 6704.837us | 75%: 7208.960us | 90%: 9628.751us | 95%: 12149.366us | 98%: 14216.271us | 99%: 15022.868us | 99.5%: 15526.991us | 99.9%: 15930.289us | 99.99%-99.99999%: 16131.938us
00:09:01.838 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:01.838   1%: 5898.240us | 10%: 6200.714us | 25%: 6402.363us | 50%: 6704.837us | 75%: 7208.960us | 90%: 8872.566us | 95%: 12552.665us | 98%: 14417.920us | 99%: 15022.868us | 99.5%: 15627.815us | 99.9%: 15829.465us | 99.99%-99.99999%: 16131.938us
00:09:01.838 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:01.838   1%: 5873.034us | 10%: 6175.508us | 25%: 6402.363us | 50%: 6704.837us | 75%: 7158.548us | 90%: 8822.154us | 95%: 12451.840us | 98%: 14216.271us | 99%: 15325.342us | 99.5%: 15526.991us | 99.9%: 15930.289us | 99.99%-99.99999%: 16232.763us
00:09:01.838 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:01.838   1%: 5898.240us | 10%: 6225.920us | 25%: 6427.569us | 50%: 6704.837us | 75%: 7208.960us | 90%: 9175.040us | 95%: 12351.015us | 98%: 13913.797us | 99%: 14619.569us | 99.5%: 15123.692us | 99.9%: 15526.991us | 99.99%-99.99999%: 15930.289us
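Editor's note: the summary percentiles above can be read straight off the cumulative histograms that follow: the p-th percentile is the upper edge of the first bucket whose cumulative share reaches p%. A sketch of that lookup (editor's reconstruction of the relationship visible in the output, not the perf tool's own code), using three consecutive buckets taken from the PCIE (0000:00:06.0) NSID 1 histogram below:

    def percentile(buckets, p):
        """Upper edge (us) of the first bucket whose cumulative share reaches p%."""
        for upper_us, cum_pct in buckets:
            if cum_pct >= p:
                return upper_us
        raise ValueError(f"histogram never reaches {p}%")

    # (bucket upper bound in us, cumulative %) from the 0000:00:06.0 NSID 1 data:
    buckets = [(14821.218, 98.9279), (14922.043, 99.0135), (15022.868, 99.0705)]
    print(percentile(buckets, 99.0))
    # -> 14922.043, matching the summary line "99%: 14922.043us" above.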
00:09:01.838 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:01.838 ==============================================================================
00:09:01.838        Range in us     Cumulative    IO count
00:09:01.839 [ bucket lines from 5016.025us to 27020.997us elided; cumulative IO reaches 100.0000% at 27020.997us ]
00:09:01.839 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:01.840 ==============================================================================
00:09:01.840        Range in us     Cumulative    IO count
00:09:01.840 [ bucket lines from 5595.766us to 25105.329us elided; cumulative IO reaches 100.0000% at 25105.329us ]
00:09:01.841 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:01.841 ==============================================================================
00:09:01.841        Range in us     Cumulative    IO count
00:09:01.841 [ bucket lines from 5293.292us onward elided; this excerpt ends mid-histogram ]
- 11342.769: 93.8406% ( 22) 00:09:01.841 11342.769 - 11393.182: 93.9878% ( 26) 00:09:01.841 11393.182 - 11443.594: 94.1010% ( 20) 00:09:01.841 11443.594 - 11494.006: 94.2029% ( 18) 00:09:01.841 11494.006 - 11544.418: 94.3274% ( 22) 00:09:01.841 11544.418 - 11594.831: 94.4577% ( 23) 00:09:01.841 11594.831 - 11645.243: 94.5539% ( 17) 00:09:01.841 11645.243 - 11695.655: 94.6332% ( 14) 00:09:01.841 11695.655 - 11746.068: 94.7011% ( 12) 00:09:01.841 11746.068 - 11796.480: 94.7634% ( 11) 00:09:01.841 11796.480 - 11846.892: 94.8030% ( 7) 00:09:01.841 11846.892 - 11897.305: 94.8426% ( 7) 00:09:01.841 11897.305 - 11947.717: 94.8766% ( 6) 00:09:01.841 11947.717 - 11998.129: 94.9106% ( 6) 00:09:01.841 11998.129 - 12048.542: 94.9389% ( 5) 00:09:01.841 12048.542 - 12098.954: 94.9672% ( 5) 00:09:01.841 12098.954 - 12149.366: 95.0068% ( 7) 00:09:01.841 12149.366 - 12199.778: 95.0351% ( 5) 00:09:01.841 12199.778 - 12250.191: 95.0634% ( 5) 00:09:01.841 12250.191 - 12300.603: 95.0861% ( 4) 00:09:01.841 12300.603 - 12351.015: 95.1144% ( 5) 00:09:01.841 12351.015 - 12401.428: 95.1370% ( 4) 00:09:01.841 12401.428 - 12451.840: 95.1653% ( 5) 00:09:01.841 12451.840 - 12502.252: 95.1936% ( 5) 00:09:01.841 12502.252 - 12552.665: 95.2163% ( 4) 00:09:01.841 12552.665 - 12603.077: 95.2219% ( 1) 00:09:01.841 12603.077 - 12653.489: 95.2672% ( 8) 00:09:01.841 12653.489 - 12703.902: 95.3465% ( 14) 00:09:01.841 12703.902 - 12754.314: 95.4540% ( 19) 00:09:01.841 12754.314 - 12804.726: 95.5503% ( 17) 00:09:01.841 12804.726 - 12855.138: 95.6295% ( 14) 00:09:01.841 12855.138 - 12905.551: 95.8447% ( 38) 00:09:01.841 12905.551 - 13006.375: 96.1957% ( 62) 00:09:01.841 13006.375 - 13107.200: 96.4957% ( 53) 00:09:01.841 13107.200 - 13208.025: 96.7052% ( 37) 00:09:01.841 13208.025 - 13308.849: 96.8297% ( 22) 00:09:01.841 13308.849 - 13409.674: 96.9712% ( 25) 00:09:01.841 13409.674 - 13510.498: 97.1071% ( 24) 00:09:01.841 13510.498 - 13611.323: 97.2373% ( 23) 00:09:01.841 13611.323 - 13712.148: 97.3449% ( 19) 00:09:01.841 13712.148 - 13812.972: 97.4864% ( 25) 00:09:01.841 13812.972 - 13913.797: 97.6223% ( 24) 00:09:01.841 13913.797 - 14014.622: 97.7638% ( 25) 00:09:01.841 14014.622 - 14115.446: 97.9223% ( 28) 00:09:01.841 14115.446 - 14216.271: 98.0525% ( 23) 00:09:01.841 14216.271 - 14317.095: 98.1431% ( 16) 00:09:01.841 14317.095 - 14417.920: 98.2507% ( 19) 00:09:01.841 14417.920 - 14518.745: 98.3413% ( 16) 00:09:01.841 14518.745 - 14619.569: 98.4715% ( 23) 00:09:01.842 14619.569 - 14720.394: 98.6300% ( 28) 00:09:01.842 14720.394 - 14821.218: 98.8111% ( 32) 00:09:01.842 14821.218 - 14922.043: 98.9527% ( 25) 00:09:01.842 14922.043 - 15022.868: 99.0319% ( 14) 00:09:01.842 15022.868 - 15123.692: 99.1112% ( 14) 00:09:01.842 15123.692 - 15224.517: 99.1791% ( 12) 00:09:01.842 15224.517 - 15325.342: 99.2867% ( 19) 00:09:01.842 15325.342 - 15426.166: 99.4169% ( 23) 00:09:01.842 15426.166 - 15526.991: 99.5245% ( 19) 00:09:01.842 15526.991 - 15627.815: 99.6433% ( 21) 00:09:01.842 15627.815 - 15728.640: 99.7905% ( 26) 00:09:01.842 15728.640 - 15829.465: 99.8528% ( 11) 00:09:01.842 15829.465 - 15930.289: 99.9038% ( 9) 00:09:01.842 15930.289 - 16031.114: 99.9604% ( 10) 00:09:01.842 16031.114 - 16131.938: 100.0000% ( 7) 00:09:01.842 00:09:01.842 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:01.842 ============================================================================== 00:09:01.842 Range in us Cumulative IO count 00:09:01.842 5394.117 - 5419.323: 0.0113% ( 2) 00:09:01.842 5419.323 - 5444.529: 0.0170% ( 1) 00:09:01.842 
5444.529 - 5469.735: 0.0226% ( 1) 00:09:01.842 5469.735 - 5494.942: 0.0283% ( 1) 00:09:01.842 5545.354 - 5570.560: 0.0510% ( 4) 00:09:01.842 5570.560 - 5595.766: 0.0623% ( 2) 00:09:01.842 5595.766 - 5620.972: 0.0679% ( 1) 00:09:01.842 5646.178 - 5671.385: 0.0736% ( 1) 00:09:01.842 5671.385 - 5696.591: 0.0906% ( 3) 00:09:01.842 5696.591 - 5721.797: 0.1132% ( 4) 00:09:01.842 5721.797 - 5747.003: 0.1642% ( 9) 00:09:01.842 5747.003 - 5772.209: 0.2491% ( 15) 00:09:01.842 5772.209 - 5797.415: 0.3793% ( 23) 00:09:01.842 5797.415 - 5822.622: 0.5265% ( 26) 00:09:01.842 5822.622 - 5847.828: 0.7360% ( 37) 00:09:01.842 5847.828 - 5873.034: 0.9851% ( 44) 00:09:01.842 5873.034 - 5898.240: 1.2851% ( 53) 00:09:01.842 5898.240 - 5923.446: 1.5908% ( 54) 00:09:01.842 5923.446 - 5948.652: 1.9644% ( 66) 00:09:01.842 5948.652 - 5973.858: 2.3381% ( 66) 00:09:01.842 5973.858 - 5999.065: 2.7853% ( 79) 00:09:01.842 5999.065 - 6024.271: 3.2665% ( 85) 00:09:01.842 6024.271 - 6049.477: 4.3082% ( 184) 00:09:01.842 6049.477 - 6074.683: 4.9196% ( 108) 00:09:01.842 6074.683 - 6099.889: 5.5990% ( 120) 00:09:01.842 6099.889 - 6125.095: 6.3802% ( 138) 00:09:01.842 6125.095 - 6150.302: 7.5408% ( 205) 00:09:01.842 6150.302 - 6175.508: 9.2901% ( 309) 00:09:01.842 6175.508 - 6200.714: 10.6601% ( 242) 00:09:01.842 6200.714 - 6225.920: 12.4434% ( 315) 00:09:01.842 6225.920 - 6251.126: 14.6060% ( 382) 00:09:01.842 6251.126 - 6276.332: 16.7742% ( 383) 00:09:01.842 6276.332 - 6301.538: 18.5236% ( 309) 00:09:01.842 6301.538 - 6326.745: 20.6578% ( 377) 00:09:01.842 6326.745 - 6351.951: 22.2713% ( 285) 00:09:01.842 6351.951 - 6377.157: 23.9866% ( 303) 00:09:01.842 6377.157 - 6402.363: 25.4755% ( 263) 00:09:01.842 6402.363 - 6427.569: 27.5136% ( 360) 00:09:01.842 6427.569 - 6452.775: 29.3478% ( 324) 00:09:01.842 6452.775 - 6503.188: 33.3673% ( 710) 00:09:01.842 6503.188 - 6553.600: 37.8850% ( 798) 00:09:01.842 6553.600 - 6604.012: 42.5045% ( 816) 00:09:01.842 6604.012 - 6654.425: 46.7335% ( 747) 00:09:01.842 6654.425 - 6704.837: 50.4755% ( 661) 00:09:01.842 6704.837 - 6755.249: 54.5969% ( 728) 00:09:01.842 6755.249 - 6805.662: 58.4579% ( 682) 00:09:01.842 6805.662 - 6856.074: 61.9226% ( 612) 00:09:01.842 6856.074 - 6906.486: 64.0851% ( 382) 00:09:01.842 6906.486 - 6956.898: 66.3553% ( 401) 00:09:01.842 6956.898 - 7007.311: 68.4839% ( 376) 00:09:01.842 7007.311 - 7057.723: 70.7824% ( 406) 00:09:01.842 7057.723 - 7108.135: 72.8997% ( 374) 00:09:01.842 7108.135 - 7158.548: 74.7849% ( 333) 00:09:01.842 7158.548 - 7208.960: 76.6587% ( 331) 00:09:01.842 7208.960 - 7259.372: 78.4760% ( 321) 00:09:01.842 7259.372 - 7309.785: 79.6705% ( 211) 00:09:01.842 7309.785 - 7360.197: 80.7858% ( 197) 00:09:01.842 7360.197 - 7410.609: 82.2690% ( 262) 00:09:01.842 7410.609 - 7461.022: 83.1239% ( 151) 00:09:01.842 7461.022 - 7511.434: 84.0466% ( 163) 00:09:01.842 7511.434 - 7561.846: 84.5165% ( 83) 00:09:01.842 7561.846 - 7612.258: 85.0091% ( 87) 00:09:01.842 7612.258 - 7662.671: 85.5922% ( 103) 00:09:01.842 7662.671 - 7713.083: 85.8922% ( 53) 00:09:01.842 7713.083 - 7763.495: 86.1243% ( 41) 00:09:01.842 7763.495 - 7813.908: 86.3847% ( 46) 00:09:01.842 7813.908 - 7864.320: 86.6168% ( 41) 00:09:01.842 7864.320 - 7914.732: 86.8150% ( 35) 00:09:01.842 7914.732 - 7965.145: 87.0641% ( 44) 00:09:01.842 7965.145 - 8015.557: 87.3188% ( 45) 00:09:01.842 8015.557 - 8065.969: 87.6812% ( 64) 00:09:01.842 8065.969 - 8116.382: 87.9076% ( 40) 00:09:01.842 8116.382 - 8166.794: 88.1227% ( 38) 00:09:01.842 8166.794 - 8217.206: 88.3152% ( 34) 00:09:01.842 8217.206 - 8267.618: 
88.4907% ( 31) 00:09:01.842 8267.618 - 8318.031: 88.6096% ( 21) 00:09:01.842 8318.031 - 8368.443: 88.7115% ( 18) 00:09:01.842 8368.443 - 8418.855: 88.8304% ( 21) 00:09:01.842 8418.855 - 8469.268: 88.9606% ( 23) 00:09:01.842 8469.268 - 8519.680: 89.1870% ( 40) 00:09:01.842 8519.680 - 8570.092: 89.3625% ( 31) 00:09:01.842 8570.092 - 8620.505: 89.4701% ( 19) 00:09:01.842 8620.505 - 8670.917: 89.5550% ( 15) 00:09:01.842 8670.917 - 8721.329: 89.6456% ( 16) 00:09:01.842 8721.329 - 8771.742: 89.7475% ( 18) 00:09:01.842 8771.742 - 8822.154: 89.9287% ( 32) 00:09:01.842 8822.154 - 8872.566: 90.0532% ( 22) 00:09:01.842 8872.566 - 8922.978: 90.1381% ( 15) 00:09:01.842 8922.978 - 8973.391: 90.2400% ( 18) 00:09:01.842 8973.391 - 9023.803: 90.2627% ( 4) 00:09:01.842 9023.803 - 9074.215: 90.2797% ( 3) 00:09:01.842 9074.215 - 9124.628: 90.2853% ( 1) 00:09:01.842 9124.628 - 9175.040: 90.2966% ( 2) 00:09:01.842 9175.040 - 9225.452: 90.3136% ( 3) 00:09:01.842 9225.452 - 9275.865: 90.3363% ( 4) 00:09:01.842 9275.865 - 9326.277: 90.3646% ( 5) 00:09:01.842 9326.277 - 9376.689: 90.4042% ( 7) 00:09:01.842 9376.689 - 9427.102: 90.4325% ( 5) 00:09:01.842 9427.102 - 9477.514: 90.4665% ( 6) 00:09:01.842 9477.514 - 9527.926: 90.5005% ( 6) 00:09:01.842 9527.926 - 9578.338: 90.5288% ( 5) 00:09:01.842 9578.338 - 9628.751: 90.5627% ( 6) 00:09:01.842 9628.751 - 9679.163: 90.5910% ( 5) 00:09:01.842 9679.163 - 9729.575: 90.6363% ( 8) 00:09:01.842 9729.575 - 9779.988: 90.7156% ( 14) 00:09:01.842 9779.988 - 9830.400: 90.8005% ( 15) 00:09:01.842 9830.400 - 9880.812: 90.8684% ( 12) 00:09:01.842 9880.812 - 9931.225: 90.9137% ( 8) 00:09:01.842 9931.225 - 9981.637: 90.9420% ( 5) 00:09:01.842 9981.637 - 10032.049: 90.9647% ( 4) 00:09:01.842 10032.049 - 10082.462: 90.9873% ( 4) 00:09:01.842 10082.462 - 10132.874: 91.0100% ( 4) 00:09:01.842 10132.874 - 10183.286: 91.0892% ( 14) 00:09:01.842 10183.286 - 10233.698: 91.1515% ( 11) 00:09:01.842 10233.698 - 10284.111: 91.2081% ( 10) 00:09:01.842 10284.111 - 10334.523: 91.2874% ( 14) 00:09:01.842 10334.523 - 10384.935: 91.3779% ( 16) 00:09:01.842 10384.935 - 10435.348: 91.4629% ( 15) 00:09:01.842 10435.348 - 10485.760: 91.5704% ( 19) 00:09:01.842 10485.760 - 10536.172: 91.7912% ( 39) 00:09:01.842 10536.172 - 10586.585: 91.8591% ( 12) 00:09:01.842 10586.585 - 10636.997: 91.9214% ( 11) 00:09:01.842 10636.997 - 10687.409: 91.9837% ( 11) 00:09:01.842 10687.409 - 10737.822: 92.0177% ( 6) 00:09:01.842 10737.822 - 10788.234: 92.0630% ( 8) 00:09:01.842 10788.234 - 10838.646: 92.1082% ( 8) 00:09:01.842 10838.646 - 10889.058: 92.1479% ( 7) 00:09:01.842 10889.058 - 10939.471: 92.1875% ( 7) 00:09:01.842 10939.471 - 10989.883: 92.2215% ( 6) 00:09:01.842 10989.883 - 11040.295: 92.2611% ( 7) 00:09:01.842 11040.295 - 11090.708: 92.3120% ( 9) 00:09:01.842 11090.708 - 11141.120: 92.4083% ( 17) 00:09:01.842 11141.120 - 11191.532: 92.5159% ( 19) 00:09:01.842 11191.532 - 11241.945: 92.6404% ( 22) 00:09:01.842 11241.945 - 11292.357: 92.7763% ( 24) 00:09:01.842 11292.357 - 11342.769: 92.8725% ( 17) 00:09:01.842 11342.769 - 11393.182: 92.9971% ( 22) 00:09:01.842 11393.182 - 11443.594: 93.0820% ( 15) 00:09:01.842 11443.594 - 11494.006: 93.1442% ( 11) 00:09:01.842 11494.006 - 11544.418: 93.1839% ( 7) 00:09:01.843 11544.418 - 11594.831: 93.2009% ( 3) 00:09:01.843 11594.831 - 11645.243: 93.2801% ( 14) 00:09:01.843 11645.243 - 11695.655: 93.3764% ( 17) 00:09:01.843 11695.655 - 11746.068: 93.4443% ( 12) 00:09:01.843 11746.068 - 11796.480: 93.5405% ( 17) 00:09:01.843 11796.480 - 11846.892: 93.6141% ( 13) 00:09:01.843 
11846.892 - 11897.305: 93.7047% ( 16) 00:09:01.843 11897.305 - 11947.717: 93.7840% ( 14) 00:09:01.843 11947.717 - 11998.129: 93.8689% ( 15) 00:09:01.843 11998.129 - 12048.542: 93.9255% ( 10) 00:09:01.843 12048.542 - 12098.954: 94.0274% ( 18) 00:09:01.843 12098.954 - 12149.366: 94.1010% ( 13) 00:09:01.843 12149.366 - 12199.778: 94.1916% ( 16) 00:09:01.843 12199.778 - 12250.191: 94.3218% ( 23) 00:09:01.843 12250.191 - 12300.603: 94.3954% ( 13) 00:09:01.843 12300.603 - 12351.015: 94.5086% ( 20) 00:09:01.843 12351.015 - 12401.428: 94.6162% ( 19) 00:09:01.843 12401.428 - 12451.840: 94.6898% ( 13) 00:09:01.843 12451.840 - 12502.252: 94.7747% ( 15) 00:09:01.843 12502.252 - 12552.665: 95.1030% ( 58) 00:09:01.843 12552.665 - 12603.077: 95.3068% ( 36) 00:09:01.843 12603.077 - 12653.489: 95.3918% ( 15) 00:09:01.843 12653.489 - 12703.902: 95.4823% ( 16) 00:09:01.843 12703.902 - 12754.314: 95.5842% ( 18) 00:09:01.843 12754.314 - 12804.726: 95.6861% ( 18) 00:09:01.843 12804.726 - 12855.138: 95.7711% ( 15) 00:09:01.843 12855.138 - 12905.551: 95.8333% ( 11) 00:09:01.843 12905.551 - 13006.375: 95.9409% ( 19) 00:09:01.843 13006.375 - 13107.200: 96.0428% ( 18) 00:09:01.843 13107.200 - 13208.025: 96.1730% ( 23) 00:09:01.843 13208.025 - 13308.849: 96.2862% ( 20) 00:09:01.843 13308.849 - 13409.674: 96.4164% ( 23) 00:09:01.843 13409.674 - 13510.498: 96.5580% ( 25) 00:09:01.843 13510.498 - 13611.323: 96.6938% ( 24) 00:09:01.843 13611.323 - 13712.148: 96.8467% ( 27) 00:09:01.843 13712.148 - 13812.972: 97.0165% ( 30) 00:09:01.843 13812.972 - 13913.797: 97.1864% ( 30) 00:09:01.843 13913.797 - 14014.622: 97.3732% ( 33) 00:09:01.843 14014.622 - 14115.446: 97.5543% ( 32) 00:09:01.843 14115.446 - 14216.271: 97.7978% ( 43) 00:09:01.843 14216.271 - 14317.095: 97.9620% ( 29) 00:09:01.843 14317.095 - 14417.920: 98.1431% ( 32) 00:09:01.843 14417.920 - 14518.745: 98.3469% ( 36) 00:09:01.843 14518.745 - 14619.569: 98.5281% ( 32) 00:09:01.843 14619.569 - 14720.394: 98.6866% ( 28) 00:09:01.843 14720.394 - 14821.218: 98.8621% ( 31) 00:09:01.843 14821.218 - 14922.043: 98.9980% ( 24) 00:09:01.843 14922.043 - 15022.868: 99.1055% ( 19) 00:09:01.843 15022.868 - 15123.692: 99.1961% ( 16) 00:09:01.843 15123.692 - 15224.517: 99.2923% ( 17) 00:09:01.843 15224.517 - 15325.342: 99.3603% ( 12) 00:09:01.843 15325.342 - 15426.166: 99.4169% ( 10) 00:09:01.843 15426.166 - 15526.991: 99.4678% ( 9) 00:09:01.843 15526.991 - 15627.815: 99.6150% ( 26) 00:09:01.843 15627.815 - 15728.640: 99.8585% ( 43) 00:09:01.843 15728.640 - 15829.465: 99.9038% ( 8) 00:09:01.843 15829.465 - 15930.289: 99.9490% ( 8) 00:09:01.843 15930.289 - 16031.114: 99.9830% ( 6) 00:09:01.843 16031.114 - 16131.938: 100.0000% ( 3) 00:09:01.843 00:09:01.843 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:01.843 ============================================================================== 00:09:01.843 Range in us Cumulative IO count 00:09:01.843 5343.705 - 5368.911: 0.0113% ( 2) 00:09:01.843 5394.117 - 5419.323: 0.0170% ( 1) 00:09:01.843 5419.323 - 5444.529: 0.0226% ( 1) 00:09:01.843 5444.529 - 5469.735: 0.0283% ( 1) 00:09:01.843 5494.942 - 5520.148: 0.0453% ( 3) 00:09:01.843 5520.148 - 5545.354: 0.0510% ( 1) 00:09:01.843 5545.354 - 5570.560: 0.0566% ( 1) 00:09:01.843 5595.766 - 5620.972: 0.0679% ( 2) 00:09:01.843 5620.972 - 5646.178: 0.0906% ( 4) 00:09:01.843 5646.178 - 5671.385: 0.1019% ( 2) 00:09:01.843 5671.385 - 5696.591: 0.1245% ( 4) 00:09:01.843 5696.591 - 5721.797: 0.1925% ( 12) 00:09:01.843 5721.797 - 5747.003: 0.2717% ( 14) 00:09:01.843 5747.003 - 
5772.209: 0.3963% ( 22) 00:09:01.843 5772.209 - 5797.415: 0.5435% ( 26) 00:09:01.843 5797.415 - 5822.622: 0.7360% ( 34) 00:09:01.843 5822.622 - 5847.828: 0.9851% ( 44) 00:09:01.843 5847.828 - 5873.034: 1.2398% ( 45) 00:09:01.843 5873.034 - 5898.240: 1.5229% ( 50) 00:09:01.843 5898.240 - 5923.446: 1.8229% ( 53) 00:09:01.843 5923.446 - 5948.652: 2.1852% ( 64) 00:09:01.843 5948.652 - 5973.858: 2.6155% ( 76) 00:09:01.843 5973.858 - 5999.065: 3.3514% ( 130) 00:09:01.843 5999.065 - 6024.271: 3.9685% ( 109) 00:09:01.843 6024.271 - 6049.477: 4.7328% ( 135) 00:09:01.843 6049.477 - 6074.683: 5.4065% ( 119) 00:09:01.843 6074.683 - 6099.889: 6.3632% ( 169) 00:09:01.843 6099.889 - 6125.095: 7.3992% ( 183) 00:09:01.843 6125.095 - 6150.302: 9.1712% ( 313) 00:09:01.843 6150.302 - 6175.508: 10.2355% ( 188) 00:09:01.843 6175.508 - 6200.714: 11.2149% ( 173) 00:09:01.843 6200.714 - 6225.920: 13.0605% ( 326) 00:09:01.843 6225.920 - 6251.126: 14.4531% ( 246) 00:09:01.843 6251.126 - 6276.332: 15.8401% ( 245) 00:09:01.843 6276.332 - 6301.538: 17.6857% ( 326) 00:09:01.843 6301.538 - 6326.745: 19.3784% ( 299) 00:09:01.843 6326.745 - 6351.951: 21.9373% ( 452) 00:09:01.843 6351.951 - 6377.157: 24.1452% ( 390) 00:09:01.843 6377.157 - 6402.363: 26.1606% ( 356) 00:09:01.843 6402.363 - 6427.569: 28.0910% ( 341) 00:09:01.843 6427.569 - 6452.775: 29.6875% ( 282) 00:09:01.843 6452.775 - 6503.188: 33.4692% ( 668) 00:09:01.843 6503.188 - 6553.600: 37.2452% ( 667) 00:09:01.843 6553.600 - 6604.012: 42.1649% ( 869) 00:09:01.843 6604.012 - 6654.425: 47.3222% ( 911) 00:09:01.843 6654.425 - 6704.837: 52.0777% ( 840) 00:09:01.843 6704.837 - 6755.249: 55.7065% ( 641) 00:09:01.843 6755.249 - 6805.662: 58.2428% ( 448) 00:09:01.843 6805.662 - 6856.074: 62.0471% ( 672) 00:09:01.843 6856.074 - 6906.486: 64.7475% ( 477) 00:09:01.843 6906.486 - 6956.898: 66.7799% ( 359) 00:09:01.843 6956.898 - 7007.311: 69.2822% ( 442) 00:09:01.843 7007.311 - 7057.723: 71.4617% ( 385) 00:09:01.843 7057.723 - 7108.135: 73.3865% ( 340) 00:09:01.843 7108.135 - 7158.548: 75.3340% ( 344) 00:09:01.843 7158.548 - 7208.960: 76.6135% ( 226) 00:09:01.843 7208.960 - 7259.372: 77.8306% ( 215) 00:09:01.843 7259.372 - 7309.785: 80.2197% ( 422) 00:09:01.843 7309.785 - 7360.197: 82.0935% ( 331) 00:09:01.843 7360.197 - 7410.609: 82.9823% ( 157) 00:09:01.843 7410.609 - 7461.022: 83.7523% ( 136) 00:09:01.843 7461.022 - 7511.434: 84.2788% ( 93) 00:09:01.843 7511.434 - 7561.846: 84.6128% ( 59) 00:09:01.843 7561.846 - 7612.258: 84.8845% ( 48) 00:09:01.843 7612.258 - 7662.671: 85.2015% ( 56) 00:09:01.843 7662.671 - 7713.083: 85.6941% ( 87) 00:09:01.843 7713.083 - 7763.495: 85.8979% ( 36) 00:09:01.843 7763.495 - 7813.908: 86.0677% ( 30) 00:09:01.843 7813.908 - 7864.320: 86.2319% ( 29) 00:09:01.843 7864.320 - 7914.732: 86.4753% ( 43) 00:09:01.843 7914.732 - 7965.145: 86.7188% ( 43) 00:09:01.843 7965.145 - 8015.557: 87.0075% ( 51) 00:09:01.843 8015.557 - 8065.969: 87.2905% ( 50) 00:09:01.843 8065.969 - 8116.382: 87.5000% ( 37) 00:09:01.843 8116.382 - 8166.794: 87.6925% ( 34) 00:09:01.843 8166.794 - 8217.206: 87.8736% ( 32) 00:09:01.843 8217.206 - 8267.618: 88.0435% ( 30) 00:09:01.843 8267.618 - 8318.031: 88.3775% ( 59) 00:09:01.843 8318.031 - 8368.443: 88.7681% ( 69) 00:09:01.843 8368.443 - 8418.855: 88.9719% ( 36) 00:09:01.843 8418.855 - 8469.268: 89.1587% ( 33) 00:09:01.843 8469.268 - 8519.680: 89.3229% ( 29) 00:09:01.843 8519.680 - 8570.092: 89.4361% ( 20) 00:09:01.843 8570.092 - 8620.505: 89.5324% ( 17) 00:09:01.843 8620.505 - 8670.917: 89.6683% ( 24) 00:09:01.843 8670.917 - 
8721.329: 89.8154% ( 26) 00:09:01.843 8721.329 - 8771.742: 89.9400% ( 22) 00:09:01.843 8771.742 - 8822.154: 90.0928% ( 27) 00:09:01.843 8822.154 - 8872.566: 90.1834% ( 16) 00:09:01.843 8872.566 - 8922.978: 90.2570% ( 13) 00:09:01.843 8922.978 - 8973.391: 90.3363% ( 14) 00:09:01.843 8973.391 - 9023.803: 90.4155% ( 14) 00:09:01.843 9023.803 - 9074.215: 90.4891% ( 13) 00:09:01.843 9074.215 - 9124.628: 90.5514% ( 11) 00:09:01.843 9124.628 - 9175.040: 90.5910% ( 7) 00:09:01.843 9175.040 - 9225.452: 90.6646% ( 13) 00:09:01.843 9225.452 - 9275.865: 90.7326% ( 12) 00:09:01.843 9275.865 - 9326.277: 90.7892% ( 10) 00:09:01.843 9326.277 - 9376.689: 90.8288% ( 7) 00:09:01.843 9376.689 - 9427.102: 90.8854% ( 10) 00:09:01.843 9427.102 - 9477.514: 90.9420% ( 10) 00:09:01.843 9477.514 - 9527.926: 90.9930% ( 9) 00:09:01.843 9527.926 - 9578.338: 91.0496% ( 10) 00:09:01.843 9578.338 - 9628.751: 91.1119% ( 11) 00:09:01.843 9628.751 - 9679.163: 91.1741% ( 11) 00:09:01.843 9679.163 - 9729.575: 91.2024% ( 5) 00:09:01.843 9729.575 - 9779.988: 91.2194% ( 3) 00:09:01.843 9779.988 - 9830.400: 91.2421% ( 4) 00:09:01.843 9830.400 - 9880.812: 91.2647% ( 4) 00:09:01.843 9880.812 - 9931.225: 91.2817% ( 3) 00:09:01.843 9931.225 - 9981.637: 91.2874% ( 1) 00:09:01.843 9981.637 - 10032.049: 91.2987% ( 2) 00:09:01.843 10032.049 - 10082.462: 91.3043% ( 1) 00:09:01.843 10838.646 - 10889.058: 91.3383% ( 6) 00:09:01.843 10889.058 - 10939.471: 91.3723% ( 6) 00:09:01.843 10939.471 - 10989.883: 91.4289% ( 10) 00:09:01.843 10989.883 - 11040.295: 91.4855% ( 10) 00:09:01.843 11040.295 - 11090.708: 91.5648% ( 14) 00:09:01.843 11090.708 - 11141.120: 91.6440% ( 14) 00:09:01.843 11141.120 - 11191.532: 91.7516% ( 19) 00:09:01.843 11191.532 - 11241.945: 91.8308% ( 14) 00:09:01.843 11241.945 - 11292.357: 91.9384% ( 19) 00:09:01.843 11292.357 - 11342.769: 92.0403% ( 18) 00:09:01.843 11342.769 - 11393.182: 92.1762% ( 24) 00:09:01.843 11393.182 - 11443.594: 92.3687% ( 34) 00:09:01.843 11443.594 - 11494.006: 92.6008% ( 41) 00:09:01.844 11494.006 - 11544.418: 92.8385% ( 42) 00:09:01.844 11544.418 - 11594.831: 92.9518% ( 20) 00:09:01.844 11594.831 - 11645.243: 93.0367% ( 15) 00:09:01.844 11645.243 - 11695.655: 93.1386% ( 18) 00:09:01.844 11695.655 - 11746.068: 93.2462% ( 19) 00:09:01.844 11746.068 - 11796.480: 93.3537% ( 19) 00:09:01.844 11796.480 - 11846.892: 93.4500% ( 17) 00:09:01.844 11846.892 - 11897.305: 93.5405% ( 16) 00:09:01.844 11897.305 - 11947.717: 93.6255% ( 15) 00:09:01.844 11947.717 - 11998.129: 93.7443% ( 21) 00:09:01.844 11998.129 - 12048.542: 93.8349% ( 16) 00:09:01.844 12048.542 - 12098.954: 93.9651% ( 23) 00:09:01.844 12098.954 - 12149.366: 94.1406% ( 31) 00:09:01.844 12149.366 - 12199.778: 94.2878% ( 26) 00:09:01.844 12199.778 - 12250.191: 94.4463% ( 28) 00:09:01.844 12250.191 - 12300.603: 94.6275% ( 32) 00:09:01.844 12300.603 - 12351.015: 94.7634% ( 24) 00:09:01.844 12351.015 - 12401.428: 94.8936% ( 23) 00:09:01.844 12401.428 - 12451.840: 95.0238% ( 23) 00:09:01.844 12451.840 - 12502.252: 95.2276% ( 36) 00:09:01.844 12502.252 - 12552.665: 95.5220% ( 52) 00:09:01.844 12552.665 - 12603.077: 95.6748% ( 27) 00:09:01.844 12603.077 - 12653.489: 95.8616% ( 33) 00:09:01.844 12653.489 - 12703.902: 96.0145% ( 27) 00:09:01.844 12703.902 - 12754.314: 96.1107% ( 17) 00:09:01.844 12754.314 - 12804.726: 96.2126% ( 18) 00:09:01.844 12804.726 - 12855.138: 96.2862% ( 13) 00:09:01.844 12855.138 - 12905.551: 96.3598% ( 13) 00:09:01.844 12905.551 - 13006.375: 96.5014% ( 25) 00:09:01.844 13006.375 - 13107.200: 96.6429% ( 25) 00:09:01.844 13107.200 
- 13208.025: 96.7505% ( 19) 00:09:01.844 13208.025 - 13308.849: 96.8467% ( 17) 00:09:01.844 13308.849 - 13409.674: 96.9316% ( 15) 00:09:01.844 13409.674 - 13510.498: 96.9995% ( 12) 00:09:01.844 13510.498 - 13611.323: 97.1071% ( 19) 00:09:01.844 13611.323 - 13712.148: 97.2373% ( 23) 00:09:01.844 13712.148 - 13812.972: 97.3958% ( 28) 00:09:01.844 13812.972 - 13913.797: 97.6110% ( 38) 00:09:01.844 13913.797 - 14014.622: 97.7695% ( 28) 00:09:01.844 14014.622 - 14115.446: 97.9280% ( 28) 00:09:01.844 14115.446 - 14216.271: 98.0356% ( 19) 00:09:01.844 14216.271 - 14317.095: 98.0808% ( 8) 00:09:01.844 14317.095 - 14417.920: 98.1318% ( 9) 00:09:01.844 14417.920 - 14518.745: 98.1941% ( 11) 00:09:01.844 14518.745 - 14619.569: 98.2903% ( 17) 00:09:01.844 14619.569 - 14720.394: 98.3752% ( 15) 00:09:01.844 14720.394 - 14821.218: 98.4545% ( 14) 00:09:01.844 14821.218 - 14922.043: 98.5451% ( 16) 00:09:01.844 14922.043 - 15022.868: 98.6470% ( 18) 00:09:01.844 15022.868 - 15123.692: 98.7602% ( 20) 00:09:01.844 15123.692 - 15224.517: 98.8961% ( 24) 00:09:01.844 15224.517 - 15325.342: 99.0602% ( 29) 00:09:01.844 15325.342 - 15426.166: 99.4622% ( 71) 00:09:01.844 15426.166 - 15526.991: 99.6490% ( 33) 00:09:01.844 15526.991 - 15627.815: 99.7226% ( 13) 00:09:01.844 15627.815 - 15728.640: 99.7905% ( 12) 00:09:01.844 15728.640 - 15829.465: 99.8641% ( 13) 00:09:01.844 15829.465 - 15930.289: 99.9151% ( 9) 00:09:01.844 15930.289 - 16031.114: 99.9490% ( 6) 00:09:01.844 16031.114 - 16131.938: 99.9774% ( 5) 00:09:01.844 16131.938 - 16232.763: 100.0000% ( 4) 00:09:01.844 00:09:01.844 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:01.844 ============================================================================== 00:09:01.844 Range in us Cumulative IO count 00:09:01.844 5293.292 - 5318.498: 0.0113% ( 2) 00:09:01.844 5318.498 - 5343.705: 0.0170% ( 1) 00:09:01.844 5394.117 - 5419.323: 0.0226% ( 1) 00:09:01.844 5419.323 - 5444.529: 0.0283% ( 1) 00:09:01.844 5444.529 - 5469.735: 0.0340% ( 1) 00:09:01.844 5520.148 - 5545.354: 0.0453% ( 2) 00:09:01.844 5545.354 - 5570.560: 0.0566% ( 2) 00:09:01.844 5570.560 - 5595.766: 0.0793% ( 4) 00:09:01.844 5595.766 - 5620.972: 0.1076% ( 5) 00:09:01.844 5620.972 - 5646.178: 0.1302% ( 4) 00:09:01.844 5646.178 - 5671.385: 0.1529% ( 4) 00:09:01.844 5671.385 - 5696.591: 0.1981% ( 8) 00:09:01.844 5696.591 - 5721.797: 0.2321% ( 6) 00:09:01.844 5721.797 - 5747.003: 0.2944% ( 11) 00:09:01.844 5747.003 - 5772.209: 0.3850% ( 16) 00:09:01.844 5772.209 - 5797.415: 0.5038% ( 21) 00:09:01.844 5797.415 - 5822.622: 0.6624% ( 28) 00:09:01.844 5822.622 - 5847.828: 0.8209% ( 28) 00:09:01.844 5847.828 - 5873.034: 0.9851% ( 29) 00:09:01.844 5873.034 - 5898.240: 1.1606% ( 31) 00:09:01.844 5898.240 - 5923.446: 1.3870% ( 40) 00:09:01.844 5923.446 - 5948.652: 1.6135% ( 40) 00:09:01.844 5948.652 - 5973.858: 1.9588% ( 61) 00:09:01.844 5973.858 - 5999.065: 2.3494% ( 69) 00:09:01.844 5999.065 - 6024.271: 2.9325% ( 103) 00:09:01.844 6024.271 - 6049.477: 3.7874% ( 151) 00:09:01.844 6049.477 - 6074.683: 4.7328% ( 167) 00:09:01.844 6074.683 - 6099.889: 5.2933% ( 99) 00:09:01.844 6099.889 - 6125.095: 6.2783% ( 174) 00:09:01.844 6125.095 - 6150.302: 7.7049% ( 252) 00:09:01.844 6150.302 - 6175.508: 8.7409% ( 183) 00:09:01.844 6175.508 - 6200.714: 9.8053% ( 188) 00:09:01.844 6200.714 - 6225.920: 11.2319% ( 252) 00:09:01.844 6225.920 - 6251.126: 12.3981% ( 206) 00:09:01.844 6251.126 - 6276.332: 13.4794% ( 191) 00:09:01.844 6276.332 - 6301.538: 15.4212% ( 343) 00:09:01.844 6301.538 - 6326.745: 16.9554% ( 
271) 00:09:01.844 6326.745 - 6351.951: 18.8293% ( 331) 00:09:01.844 6351.951 - 6377.157: 21.5523% ( 481) 00:09:01.844 6377.157 - 6402.363: 24.3999% ( 503) 00:09:01.844 6402.363 - 6427.569: 26.9418% ( 449) 00:09:01.844 6427.569 - 6452.775: 28.5383% ( 282) 00:09:01.844 6452.775 - 6503.188: 32.2067% ( 648) 00:09:01.844 6503.188 - 6553.600: 35.9375% ( 659) 00:09:01.844 6553.600 - 6604.012: 41.0269% ( 899) 00:09:01.844 6604.012 - 6654.425: 45.6352% ( 814) 00:09:01.844 6654.425 - 6704.837: 50.8152% ( 915) 00:09:01.844 6704.837 - 6755.249: 54.6932% ( 685) 00:09:01.844 6755.249 - 6805.662: 58.5994% ( 690) 00:09:01.844 6805.662 - 6856.074: 61.3904% ( 493) 00:09:01.844 6856.074 - 6906.486: 64.0568% ( 471) 00:09:01.844 6906.486 - 6956.898: 66.0949% ( 360) 00:09:01.844 6956.898 - 7007.311: 68.1273% ( 359) 00:09:01.844 7007.311 - 7057.723: 71.0881% ( 523) 00:09:01.844 7057.723 - 7108.135: 73.4375% ( 415) 00:09:01.844 7108.135 - 7158.548: 74.9887% ( 274) 00:09:01.844 7158.548 - 7208.960: 76.7040% ( 303) 00:09:01.844 7208.960 - 7259.372: 78.7081% ( 354) 00:09:01.844 7259.372 - 7309.785: 80.0781% ( 242) 00:09:01.844 7309.785 - 7360.197: 80.8990% ( 145) 00:09:01.844 7360.197 - 7410.609: 82.1615% ( 223) 00:09:01.844 7410.609 - 7461.022: 83.1748% ( 179) 00:09:01.844 7461.022 - 7511.434: 84.0523% ( 155) 00:09:01.844 7511.434 - 7561.846: 84.4769% ( 75) 00:09:01.844 7561.846 - 7612.258: 85.1676% ( 122) 00:09:01.844 7612.258 - 7662.671: 85.5016% ( 59) 00:09:01.844 7662.671 - 7713.083: 85.7846% ( 50) 00:09:01.844 7713.083 - 7763.495: 86.0620% ( 49) 00:09:01.844 7763.495 - 7813.908: 86.3451% ( 50) 00:09:01.844 7813.908 - 7864.320: 86.5999% ( 45) 00:09:01.844 7864.320 - 7914.732: 86.7980% ( 35) 00:09:01.844 7914.732 - 7965.145: 87.0414% ( 43) 00:09:01.844 7965.145 - 8015.557: 87.2905% ( 44) 00:09:01.844 8015.557 - 8065.969: 87.4774% ( 33) 00:09:01.844 8065.969 - 8116.382: 87.6529% ( 31) 00:09:01.844 8116.382 - 8166.794: 87.7661% ( 20) 00:09:01.844 8166.794 - 8217.206: 87.8623% ( 17) 00:09:01.844 8217.206 - 8267.618: 87.9642% ( 18) 00:09:01.844 8267.618 - 8318.031: 88.0605% ( 17) 00:09:01.844 8318.031 - 8368.443: 88.1793% ( 21) 00:09:01.844 8368.443 - 8418.855: 88.3152% ( 24) 00:09:01.844 8418.855 - 8469.268: 88.5020% ( 33) 00:09:01.844 8469.268 - 8519.680: 88.8870% ( 68) 00:09:01.844 8519.680 - 8570.092: 89.0172% ( 23) 00:09:01.844 8570.092 - 8620.505: 89.1474% ( 23) 00:09:01.844 8620.505 - 8670.917: 89.2550% ( 19) 00:09:01.844 8670.917 - 8721.329: 89.3909% ( 24) 00:09:01.844 8721.329 - 8771.742: 89.5097% ( 21) 00:09:01.844 8771.742 - 8822.154: 89.6230% ( 20) 00:09:01.844 8822.154 - 8872.566: 89.7022% ( 14) 00:09:01.844 8872.566 - 8922.978: 89.7758% ( 13) 00:09:01.844 8922.978 - 8973.391: 89.8607% ( 15) 00:09:01.844 8973.391 - 9023.803: 89.9060% ( 8) 00:09:01.844 9023.803 - 9074.215: 89.9457% ( 7) 00:09:01.844 9074.215 - 9124.628: 89.9853% ( 7) 00:09:01.844 9124.628 - 9175.040: 90.0136% ( 5) 00:09:01.844 9175.040 - 9225.452: 90.1098% ( 17) 00:09:01.844 9225.452 - 9275.865: 90.2004% ( 16) 00:09:01.844 9275.865 - 9326.277: 90.2627% ( 11) 00:09:01.844 9326.277 - 9376.689: 90.3476% ( 15) 00:09:01.844 9376.689 - 9427.102: 90.4155% ( 12) 00:09:01.844 9427.102 - 9477.514: 90.5231% ( 19) 00:09:01.844 9477.514 - 9527.926: 90.6024% ( 14) 00:09:01.844 9527.926 - 9578.338: 90.6363% ( 6) 00:09:01.844 9578.338 - 9628.751: 90.6476% ( 2) 00:09:01.844 9628.751 - 9679.163: 90.6533% ( 1) 00:09:01.844 9679.163 - 9729.575: 90.7156% ( 11) 00:09:01.844 9729.575 - 9779.988: 90.7665% ( 9) 00:09:01.844 9779.988 - 9830.400: 90.8175% ( 9) 
00:09:01.844 9830.400 - 9880.812: 90.8854% ( 12) 00:09:01.844 9880.812 - 9931.225: 90.9420% ( 10) 00:09:01.844 9931.225 - 9981.637: 90.9986% ( 10) 00:09:01.844 9981.637 - 10032.049: 91.0553% ( 10) 00:09:01.844 10032.049 - 10082.462: 91.0836% ( 5) 00:09:01.844 10082.462 - 10132.874: 91.1232% ( 7) 00:09:01.844 10132.874 - 10183.286: 91.1685% ( 8) 00:09:01.844 10183.286 - 10233.698: 91.2477% ( 14) 00:09:01.844 10233.698 - 10284.111: 91.3496% ( 18) 00:09:01.844 10284.111 - 10334.523: 91.4346% ( 15) 00:09:01.844 10334.523 - 10384.935: 91.5195% ( 15) 00:09:01.844 10384.935 - 10435.348: 91.5704% ( 9) 00:09:01.844 10435.348 - 10485.760: 91.6497% ( 14) 00:09:01.844 10485.760 - 10536.172: 91.7176% ( 12) 00:09:01.844 10536.172 - 10586.585: 91.7856% ( 12) 00:09:01.845 10586.585 - 10636.997: 91.8761% ( 16) 00:09:01.845 10636.997 - 10687.409: 91.9497% ( 13) 00:09:01.845 10687.409 - 10737.822: 92.0233% ( 13) 00:09:01.845 10737.822 - 10788.234: 92.0856% ( 11) 00:09:01.845 10788.234 - 10838.646: 92.1592% ( 13) 00:09:01.845 10838.646 - 10889.058: 92.2328% ( 13) 00:09:01.845 10889.058 - 10939.471: 92.3404% ( 19) 00:09:01.845 10939.471 - 10989.883: 92.4592% ( 21) 00:09:01.845 10989.883 - 11040.295: 92.5611% ( 18) 00:09:01.845 11040.295 - 11090.708: 92.6630% ( 18) 00:09:01.845 11090.708 - 11141.120: 92.7819% ( 21) 00:09:01.845 11141.120 - 11191.532: 92.8668% ( 15) 00:09:01.845 11191.532 - 11241.945: 92.9461% ( 14) 00:09:01.845 11241.945 - 11292.357: 93.0423% ( 17) 00:09:01.845 11292.357 - 11342.769: 93.1442% ( 18) 00:09:01.845 11342.769 - 11393.182: 93.2348% ( 16) 00:09:01.845 11393.182 - 11443.594: 93.3537% ( 21) 00:09:01.845 11443.594 - 11494.006: 93.4839% ( 23) 00:09:01.845 11494.006 - 11544.418: 93.6085% ( 22) 00:09:01.845 11544.418 - 11594.831: 93.9085% ( 53) 00:09:01.845 11594.831 - 11645.243: 93.9595% ( 9) 00:09:01.845 11645.243 - 11695.655: 94.0104% ( 9) 00:09:01.845 11695.655 - 11746.068: 94.0500% ( 7) 00:09:01.845 11746.068 - 11796.480: 94.0953% ( 8) 00:09:01.845 11796.480 - 11846.892: 94.1350% ( 7) 00:09:01.845 11846.892 - 11897.305: 94.2029% ( 12) 00:09:01.845 11897.305 - 11947.717: 94.2765% ( 13) 00:09:01.845 11947.717 - 11998.129: 94.3388% ( 11) 00:09:01.845 11998.129 - 12048.542: 94.4407% ( 18) 00:09:01.845 12048.542 - 12098.954: 94.5539% ( 20) 00:09:01.845 12098.954 - 12149.366: 94.6445% ( 16) 00:09:01.845 12149.366 - 12199.778: 94.7577% ( 20) 00:09:01.845 12199.778 - 12250.191: 94.8483% ( 16) 00:09:01.845 12250.191 - 12300.603: 94.9389% ( 16) 00:09:01.845 12300.603 - 12351.015: 95.0351% ( 17) 00:09:01.845 12351.015 - 12401.428: 95.1427% ( 19) 00:09:01.845 12401.428 - 12451.840: 95.4257% ( 50) 00:09:01.845 12451.840 - 12502.252: 95.5899% ( 29) 00:09:01.845 12502.252 - 12552.665: 95.6805% ( 16) 00:09:01.845 12552.665 - 12603.077: 95.7541% ( 13) 00:09:01.845 12603.077 - 12653.489: 95.8333% ( 14) 00:09:01.845 12653.489 - 12703.902: 95.9183% ( 15) 00:09:01.845 12703.902 - 12754.314: 95.9975% ( 14) 00:09:01.845 12754.314 - 12804.726: 96.0711% ( 13) 00:09:01.845 12804.726 - 12855.138: 96.1560% ( 15) 00:09:01.845 12855.138 - 12905.551: 96.2353% ( 14) 00:09:01.845 12905.551 - 13006.375: 96.3598% ( 22) 00:09:01.845 13006.375 - 13107.200: 96.4731% ( 20) 00:09:01.845 13107.200 - 13208.025: 96.6259% ( 27) 00:09:01.845 13208.025 - 13308.849: 96.8014% ( 31) 00:09:01.845 13308.849 - 13409.674: 97.0222% ( 39) 00:09:01.845 13409.674 - 13510.498: 97.2147% ( 34) 00:09:01.845 13510.498 - 13611.323: 97.4638% ( 44) 00:09:01.845 13611.323 - 13712.148: 97.7129% ( 44) 00:09:01.845 13712.148 - 13812.972: 97.9620% ( 44) 
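The cumulative histograms above make percentile estimates easy to pull out mechanically: the first bucket whose cumulative percentage reaches a target is that percentile's latency bucket. A small sketch, assuming the histogram rows have been saved to perf_hist.txt without the Jenkins timestamp prefix (the file name is illustrative):

  # Row format is 'LOW - HIGH: CUM% ( COUNT )', so $1/$3 are the bucket
  # bounds in microseconds and $4 is the cumulative percentage.
  awk '$2 == "-" && $4+0 >= 99.0 { sub(/:$/, "", $3); print "p99 bucket:", $1, "-", $3, "us"; exit }' perf_hist.txt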
00:09:01.845 
00:09:01.845 10:36:32 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:01.845 
00:09:01.845 real	0m2.611s
00:09:01.845 user	0m2.316s
00:09:01.845 sys	0m0.201s
00:09:01.845 10:36:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:01.845 10:36:32 -- common/autotest_common.sh@10 -- # set +x
00:09:01.845 ************************************
00:09:01.845 END TEST nvme_perf
00:09:01.845 ************************************
00:09:01.845 10:36:32 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:01.845 10:36:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:01.845 10:36:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:01.845 10:36:32 -- common/autotest_common.sh@10 -- # set +x
00:09:01.845 ************************************
00:09:01.845 START TEST nvme_hello_world
00:09:01.845 ************************************
00:09:01.845 10:36:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:02.104 Initializing NVMe Controllers
00:09:02.104 Attached to 0000:00:06.0
00:09:02.104   Namespace ID: 1 size: 6GB
00:09:02.104 Attached to 0000:00:07.0
00:09:02.104   Namespace ID: 1 size: 5GB
00:09:02.104 Attached to 0000:00:09.0
00:09:02.104   Namespace ID: 1 size: 1GB
00:09:02.104 Attached to 0000:00:08.0
00:09:02.104   Namespace ID: 1 size: 4GB
00:09:02.104   Namespace ID: 2 size: 4GB
00:09:02.104   Namespace ID: 3 size: 4GB
00:09:02.104 Initialization complete.
00:09:02.104 INFO: using host memory buffer for IO
00:09:02.104 Hello world!
00:09:02.104 INFO: using host memory buffer for IO
00:09:02.104 Hello world!
00:09:02.104 INFO: using host memory buffer for IO
00:09:02.104 Hello world!
00:09:02.104 INFO: using host memory buffer for IO
00:09:02.104 Hello world!
00:09:02.104 INFO: using host memory buffer for IO
00:09:02.104 Hello world!
00:09:02.104 INFO: using host memory buffer for IO
00:09:02.104 Hello world!
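The six "Hello world!" lines correspond to one write/read round-trip per attached namespace: four controllers, with three namespaces on 0000:00:08.0. A minimal sketch for rerunning the example by hand, assuming an already-built SPDK tree and root access; the binary path and -i 0 argument are taken verbatim from the invocation above:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk        # build tree used by this job
  sudo "$SPDK_DIR/scripts/setup.sh"            # bind the NVMe devices to a userspace driver first
  sudo "$SPDK_DIR/build/examples/hello_world" -i 0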
00:09:02.104 
00:09:02.104 real	0m0.296s
00:09:02.104 user	0m0.129s
00:09:02.104 sys	0m0.102s
00:09:02.104 10:36:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:02.104 ************************************
00:09:02.104 END TEST nvme_hello_world
00:09:02.104 ************************************
00:09:02.104 10:36:32 -- common/autotest_common.sh@10 -- # set +x
00:09:02.104 10:36:32 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:02.104 10:36:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:02.104 10:36:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:02.104 10:36:32 -- common/autotest_common.sh@10 -- # set +x
00:09:02.104 ************************************
00:09:02.104 START TEST nvme_sgl
00:09:02.104 ************************************
00:09:02.104 10:36:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:02.363 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:09:02.363 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:09:02.363 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:09:02.363 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:09:02.363 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:09:02.363 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:09:02.363 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:09:02.363 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:09:02.363 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:09:02.363 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:09:02.363 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:09:02.363 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:09:02.363 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:09:02.363 0000:00:08.0: build_io_request_11 Invalid IO length parameter
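The "Invalid IO length parameter" lines above are the negative half of the SGL test: build_io_request_* cases that deliberately submit malformed scatter-gather lengths. They appear to be expected rejections, since the run completes and the valid cases report "test passed" below. A sketch for rerunning just this test, assuming devices are already bound as in the hello_world sketch (binary path taken from the invocation above):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl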
00:09:02.363 NVMe Readv/Writev Request test
00:09:02.363 Attached to 0000:00:06.0
00:09:02.363 Attached to 0000:00:07.0
00:09:02.363 Attached to 0000:00:09.0
00:09:02.363 Attached to 0000:00:08.0
00:09:02.363 0000:00:06.0: build_io_request_2 test passed
00:09:02.363 0000:00:06.0: build_io_request_4 test passed
00:09:02.363 0000:00:06.0: build_io_request_5 test passed
00:09:02.363 0000:00:06.0: build_io_request_6 test passed
00:09:02.363 0000:00:06.0: build_io_request_7 test passed
00:09:02.363 0000:00:06.0: build_io_request_10 test passed
00:09:02.363 0000:00:07.0: build_io_request_2 test passed
00:09:02.363 0000:00:07.0: build_io_request_4 test passed
00:09:02.363 0000:00:07.0: build_io_request_5 test passed
00:09:02.363 0000:00:07.0: build_io_request_6 test passed
00:09:02.363 0000:00:07.0: build_io_request_7 test passed
00:09:02.363 0000:00:07.0: build_io_request_10 test passed
00:09:02.363 Cleaning up...
00:09:02.363 
00:09:02.363 real	0m0.398s
00:09:02.363 user	0m0.242s
00:09:02.363 sys	0m0.102s
00:09:02.363 10:36:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:02.363 10:36:32 -- common/autotest_common.sh@10 -- # set +x
00:09:02.363 ************************************
00:09:02.363 END TEST nvme_sgl
00:09:02.363 ************************************
00:09:02.622 10:36:32 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:02.622 10:36:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:02.622 10:36:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:02.622 10:36:32 -- common/autotest_common.sh@10 -- # set +x
00:09:02.622 ************************************
00:09:02.622 START TEST nvme_e2edp
00:09:02.622 ************************************
00:09:02.622 10:36:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:02.622 NVMe Write/Read with End-to-End data protection test
00:09:02.622 Attached to 0000:00:06.0
00:09:02.622 Attached to 0000:00:07.0
00:09:02.622 Attached to 0000:00:09.0
00:09:02.622 Attached to 0000:00:08.0
00:09:02.622 Cleaning up...
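nvme_dp exercises NVMe end-to-end data protection (PI) writes and reads. Here it attaches to all four controllers and goes straight to cleanup with no per-command output, presumably because these QEMU namespaces are not formatted with protection information; that reading is an assumption, not something the log states. Rerun sketch (path from the invocation above):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp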
00:09:02.622 
00:09:02.622 real	0m0.202s
00:09:02.622 user	0m0.066s
00:09:02.622 sys	0m0.095s
00:09:02.622 ************************************
00:09:02.622 END TEST nvme_e2edp
00:09:02.622 ************************************
00:09:02.622 10:36:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:02.622 10:36:33 -- common/autotest_common.sh@10 -- # set +x
00:09:02.622 10:36:33 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:02.622 10:36:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:02.622 10:36:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:02.622 10:36:33 -- common/autotest_common.sh@10 -- # set +x
00:09:02.622 ************************************
00:09:02.622 START TEST nvme_reserve
00:09:02.622 ************************************
00:09:02.622 10:36:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:02.880 =====================================================
00:09:02.880 NVMe Controller at PCI bus 0, device 6, function 0
00:09:02.880 =====================================================
00:09:02.880 Reservations:                Not Supported
00:09:02.880 =====================================================
00:09:02.880 NVMe Controller at PCI bus 0, device 7, function 0
00:09:02.880 =====================================================
00:09:02.880 Reservations:                Not Supported
00:09:02.880 =====================================================
00:09:02.880 NVMe Controller at PCI bus 0, device 9, function 0
00:09:02.880 =====================================================
00:09:02.880 Reservations:                Not Supported
00:09:02.880 =====================================================
00:09:02.880 NVMe Controller at PCI bus 0, device 8, function 0
00:09:02.880 =====================================================
00:09:02.880 Reservations:                Not Supported
00:09:02.880 Reservation test passed
00:09:02.880 
00:09:02.880 real	0m0.211s
00:09:02.880 user	0m0.063s
00:09:02.880 sys	0m0.095s
00:09:02.880 ************************************
00:09:02.880 END TEST nvme_reserve
00:09:02.880 ************************************
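All four emulated controllers report "Reservations: Not Supported", so the reservation test passes without exercising anything; on hardware that supports NVMe reservations the binary would presumably run the register/acquire/release sequence. Rerun sketch (path from the invocation above):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve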
00:09:02.880 10:36:33 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:02.880 10:36:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:02.880 10:36:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:02.880 10:36:33 -- common/autotest_common.sh@10 -- # set +x
00:09:02.880 ************************************
00:09:02.880 START TEST nvme_err_injection
00:09:02.880 ************************************
00:09:02.880 10:36:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:03.138 NVMe Error Injection test
00:09:03.138 Attached to 0000:00:06.0
00:09:03.138 Attached to 0000:00:07.0
00:09:03.138 Attached to 0000:00:09.0
00:09:03.138 Attached to 0000:00:08.0
00:09:03.138 0000:00:06.0: get features failed as expected
00:09:03.138 0000:00:07.0: get features failed as expected
00:09:03.138 0000:00:09.0: get features failed as expected
00:09:03.138 0000:00:08.0: get features failed as expected
00:09:03.138 0000:00:06.0: get features successfully as expected
00:09:03.138 0000:00:07.0: get features successfully as expected
00:09:03.138 0000:00:09.0: get features successfully as expected
00:09:03.138 0000:00:08.0: get features successfully as expected
00:09:03.138 0000:00:06.0: read failed as expected
00:09:03.138 0000:00:07.0: read failed as expected
00:09:03.138 0000:00:09.0: read failed as expected
00:09:03.138 0000:00:08.0: read failed as expected
00:09:03.138 0000:00:06.0: read successfully as expected
00:09:03.138 0000:00:07.0: read successfully as expected
00:09:03.138 0000:00:09.0: read successfully as expected
00:09:03.138 0000:00:08.0: read successfully as expected
00:09:03.138 Cleaning up...
00:09:03.139 
00:09:03.139 real	0m0.258s
00:09:03.139 user	0m0.108s
00:09:03.139 sys	0m0.106s
00:09:03.139 ************************************
00:09:03.139 END TEST nvme_err_injection
00:09:03.139 ************************************
00:09:03.139 10:36:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:03.139 10:36:33 -- common/autotest_common.sh@10 -- # set +x
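The pairing above is the whole shape of the error-injection test: each command class (get features, read) is first issued against each controller with an injected error and must fail ("failed as expected"), then with injection cleared and must succeed ("successfully as expected"). Rerun sketch (path from the invocation above):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection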
00:09:03.396 10:36:33 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:03.397 10:36:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:09:03.397 10:36:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:03.397 10:36:33 -- common/autotest_common.sh@10 -- # set +x
00:09:03.397 ************************************
00:09:03.397 START TEST nvme_overhead
00:09:03.397 ************************************
00:09:03.397 10:36:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:04.771 Initializing NVMe Controllers
00:09:04.771 Attached to 0000:00:06.0
00:09:04.771 Attached to 0000:00:07.0
00:09:04.771 Attached to 0000:00:09.0
00:09:04.771 Attached to 0000:00:08.0
00:09:04.771 Initialization complete. Launching workers.
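The submit/complete summary and histograms below come from this overhead run. The flag readings here are inferred from the output rather than documented in the log, and are worth checking against the binary's --help: -o 4096 looks like a 4 KiB I/O size, -t 1 a one-second run, -H the switch that enables the per-operation histograms, and -i 0 the same shared-memory id the other tests pass.

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0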
00:09:04.771 submit (in ns)   avg, min, max =  12163.8,  11320.0, 255841.5
00:09:04.771 complete (in ns) avg, min, max =   8082.1,   7222.3, 237486.9
00:09:04.771 
00:09:04.771 Submit histogram
00:09:04.771 ================
00:09:04.771        Range in us     Cumulative     Count
00:09:04.772 [histogram rows condensed: from 0.0066% ( 1) at 11.274 - 11.323 us to 100.0000% ( 1) at 255.212 - 256.788 us]
00:09:04.772 
00:09:04.772 Complete histogram
00:09:04.772 ==================
00:09:04.772        Range in us     Cumulative     Count
00:09:04.772 [histogram rows condensed: from 0.0066% ( 1) at 7.188 - 7.237 us to 100.0000% ( 1) at 236.308 - 237.883 us]
00:09:04.772 
00:09:04.772
00:09:04.772 ************************************
00:09:04.772 END TEST nvme_overhead
00:09:04.772 ************************************
00:09:04.772
00:09:04.772 real 0m1.224s
00:09:04.772 user 0m1.073s
00:09:04.772 sys 0m0.100s
00:09:04.772 10:36:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:04.772 10:36:34 -- common/autotest_common.sh@10 -- # set +x
00:09:04.772 10:36:35 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:04.772 10:36:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:09:04.772 10:36:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:04.772 10:36:35 -- common/autotest_common.sh@10 -- # set +x
00:09:04.773 ************************************
00:09:04.773 START TEST nvme_arbitration
00:09:04.773 ************************************
00:09:04.773 10:36:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:08.065 Initializing NVMe Controllers
00:09:08.065 Attached to 0000:00:06.0
00:09:08.065 Attached to 0000:00:07.0
00:09:08.065 Attached to 0000:00:09.0
00:09:08.065 Attached to 0000:00:08.0
00:09:08.065 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:08.065 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:08.065 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:08.065 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:08.065 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:08.065 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:08.065 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:08.065 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:08.065 Initialization complete. Launching workers.
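The arbitration example launched above can be re-run by hand against a local SPDK tree. A minimal sketch, assuming SPDK is built with its examples, root privileges for device rebinding, and an SPDK_DIR path that is an assumption rather than something taken from this log:

    #!/usr/bin/env bash
    # Sketch: repeat the arbitration run above with the same short settings.
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk}                       # assumption: local SPDK checkout
    sudo "$SPDK_DIR/scripts/setup.sh"                      # rebind NVMe devices to userspace drivers
    sudo "$SPDK_DIR/build/examples/arbitration" -t 3 -i 0  # -t runtime in seconds, -i shm instance id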
00:09:08.065 Starting thread on core 1 with urgent priority queue 00:09:08.065 Starting thread on core 2 with urgent priority queue 00:09:08.065 Starting thread on core 3 with urgent priority queue 00:09:08.065 Starting thread on core 0 with urgent priority queue 00:09:08.065 QEMU NVMe Ctrl (12340 ) core 0: 1045.33 IO/s 95.66 secs/100000 ios 00:09:08.065 QEMU NVMe Ctrl (12342 ) core 0: 1045.33 IO/s 95.66 secs/100000 ios 00:09:08.065 QEMU NVMe Ctrl (12341 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:09:08.065 QEMU NVMe Ctrl (12342 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:09:08.065 QEMU NVMe Ctrl (12343 ) core 2: 789.33 IO/s 126.69 secs/100000 ios 00:09:08.065 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios 00:09:08.065 ======================================================== 00:09:08.065 00:09:08.065 00:09:08.065 real 0m3.416s 00:09:08.065 user 0m9.503s 00:09:08.065 sys 0m0.109s 00:09:08.065 ************************************ 00:09:08.065 END TEST nvme_arbitration 00:09:08.065 ************************************ 00:09:08.065 10:36:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.065 10:36:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.065 10:36:38 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:08.065 10:36:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:08.065 10:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.065 10:36:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.065 ************************************ 00:09:08.065 START TEST nvme_single_aen 00:09:08.065 ************************************ 00:09:08.065 10:36:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:08.065 [2024-12-03 10:36:38.522687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:08.065 [2024-12-03 10:36:38.522848] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.323 [2024-12-03 10:36:38.665140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:08.323 [2024-12-03 10:36:38.666669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:08.323 [2024-12-03 10:36:38.667773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:08.323 [2024-12-03 10:36:38.669031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:08.323 Asynchronous Event Request test 00:09:08.323 Attached to 0000:00:06.0 00:09:08.323 Attached to 0000:00:07.0 00:09:08.323 Attached to 0000:00:09.0 00:09:08.323 Attached to 0000:00:08.0 00:09:08.323 Reset controller to setup AER completions for this process 00:09:08.323 Registering asynchronous event callbacks... 
00:09:08.323 Getting orig temperature thresholds of all controllers 00:09:08.323 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:08.323 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:08.323 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:08.323 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:08.323 Setting all controllers temperature threshold low to trigger AER 00:09:08.323 Waiting for all controllers temperature threshold to be set lower 00:09:08.323 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:08.323 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:08.323 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:08.323 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:08.323 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:08.323 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:08.323 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:08.323 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:08.323 Waiting for all controllers to trigger AER and reset threshold 00:09:08.323 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.323 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.323 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.323 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.323 Cleaning up... 00:09:08.323 00:09:08.323 real 0m0.211s 00:09:08.323 user 0m0.060s 00:09:08.323 sys 0m0.100s 00:09:08.323 10:36:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.323 10:36:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.323 ************************************ 00:09:08.323 END TEST nvme_single_aen 00:09:08.323 ************************************ 00:09:08.323 10:36:38 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:08.323 10:36:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.323 10:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.323 10:36:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.323 ************************************ 00:09:08.323 START TEST nvme_doorbell_aers 00:09:08.323 ************************************ 00:09:08.323 10:36:38 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:08.323 10:36:38 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:08.323 10:36:38 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:08.323 10:36:38 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:08.323 10:36:38 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:08.323 10:36:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:08.323 10:36:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:08.323 10:36:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:08.323 10:36:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:08.323 10:36:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:08.323 10:36:38 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:08.323 10:36:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:08.324 10:36:38 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:08.324 10:36:38 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:08.581 [2024-12-03 10:36:38.987452] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:18.550 Executing: test_write_invalid_db 00:09:18.550 Waiting for AER completion... 00:09:18.550 Failure: test_write_invalid_db 00:09:18.550 00:09:18.550 Executing: test_invalid_db_write_overflow_sq 00:09:18.550 Waiting for AER completion... 00:09:18.550 Failure: test_invalid_db_write_overflow_sq 00:09:18.550 00:09:18.550 Executing: test_invalid_db_write_overflow_cq 00:09:18.550 Waiting for AER completion... 00:09:18.550 Failure: test_invalid_db_write_overflow_cq 00:09:18.550 00:09:18.550 10:36:48 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:18.550 10:36:48 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:18.550 [2024-12-03 10:36:49.037644] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:28.519 Executing: test_write_invalid_db 00:09:28.519 Waiting for AER completion... 00:09:28.519 Failure: test_write_invalid_db 00:09:28.519 00:09:28.519 Executing: test_invalid_db_write_overflow_sq 00:09:28.519 Waiting for AER completion... 00:09:28.519 Failure: test_invalid_db_write_overflow_sq 00:09:28.519 00:09:28.519 Executing: test_invalid_db_write_overflow_cq 00:09:28.519 Waiting for AER completion... 00:09:28.519 Failure: test_invalid_db_write_overflow_cq 00:09:28.519 00:09:28.519 10:36:58 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:28.519 10:36:58 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:28.519 [2024-12-03 10:36:59.061240] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:38.509 Executing: test_write_invalid_db 00:09:38.509 Waiting for AER completion... 00:09:38.509 Failure: test_write_invalid_db 00:09:38.509 00:09:38.509 Executing: test_invalid_db_write_overflow_sq 00:09:38.509 Waiting for AER completion... 00:09:38.509 Failure: test_invalid_db_write_overflow_sq 00:09:38.509 00:09:38.509 Executing: test_invalid_db_write_overflow_cq 00:09:38.509 Waiting for AER completion... 00:09:38.509 Failure: test_invalid_db_write_overflow_cq 00:09:38.509 00:09:38.509 10:37:08 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:38.509 10:37:08 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:09:38.509 [2024-12-03 10:37:09.100913] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.475 Executing: test_write_invalid_db 00:09:48.475 Waiting for AER completion... 00:09:48.475 Failure: test_write_invalid_db 00:09:48.475 00:09:48.475 Executing: test_invalid_db_write_overflow_sq 00:09:48.475 Waiting for AER completion... 00:09:48.475 Failure: test_invalid_db_write_overflow_sq 00:09:48.475 00:09:48.475 Executing: test_invalid_db_write_overflow_cq 00:09:48.475 Waiting for AER completion... 
00:09:48.475 Failure: test_invalid_db_write_overflow_cq 00:09:48.475 00:09:48.475 00:09:48.475 real 0m40.194s 00:09:48.475 user 0m34.115s 00:09:48.475 sys 0m5.681s 00:09:48.475 10:37:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.475 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:09:48.475 ************************************ 00:09:48.475 END TEST nvme_doorbell_aers 00:09:48.475 ************************************ 00:09:48.475 10:37:18 -- nvme/nvme.sh@97 -- # uname 00:09:48.475 10:37:18 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:48.475 10:37:18 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:48.475 10:37:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:48.475 10:37:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.475 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:09:48.475 ************************************ 00:09:48.475 START TEST nvme_multi_aen 00:09:48.475 ************************************ 00:09:48.475 10:37:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:48.475 [2024-12-03 10:37:19.007004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:48.475 [2024-12-03 10:37:19.007151] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.733 [2024-12-03 10:37:19.152073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:48.733 [2024-12-03 10:37:19.152239] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.733 [2024-12-03 10:37:19.152629] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.152735] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.154245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:48.734 [2024-12-03 10:37:19.154335] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.154425] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.154462] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.155700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:48.734 [2024-12-03 10:37:19.155784] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.155852] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 
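The get_nvme_bdfs helper traced in the doorbell test above builds its device list by piping gen_nvme.sh through jq. A standalone sketch of the same discovery step, assuming jq is installed and that SPDK_DIR (again an assumption, not from this log) points at a checkout:

    #!/usr/bin/env bash
    # Sketch: enumerate NVMe PCI addresses the way get_nvme_bdfs does above.
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk}   # assumption: local SPDK checkout
    # gen_nvme.sh emits a JSON bdev config; params.traddr of each entry is a PCI BDF
    mapfile -t bdfs < <(sudo "$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"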
00:09:48.734 [2024-12-03 10:37:19.155887] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.156956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:48.734 [2024-12-03 10:37:19.157035] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.157141] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.157209] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63955) is not found. Dropping the request. 00:09:48.734 [2024-12-03 10:37:19.167830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:48.734 [2024-12-03 10:37:19.168350] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.734 Child process pid: 64485 00:09:48.734 [Child] Asynchronous Event Request test 00:09:48.734 [Child] Attached to 0000:00:06.0 00:09:48.734 [Child] Attached to 0000:00:07.0 00:09:48.734 [Child] Attached to 0000:00:09.0 00:09:48.734 [Child] Attached to 0000:00:08.0 00:09:48.734 [Child] Registering asynchronous event callbacks... 00:09:48.734 [Child] Getting orig temperature thresholds of all controllers 00:09:48.734 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.734 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.734 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.734 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.734 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:48.734 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.734 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.734 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.734 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.734 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.734 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.734 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.734 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.734 [Child] Cleaning up... 00:09:48.992 Asynchronous Event Request test 00:09:48.992 Attached to 0000:00:06.0 00:09:48.992 Attached to 0000:00:07.0 00:09:48.992 Attached to 0000:00:09.0 00:09:48.992 Attached to 0000:00:08.0 00:09:48.992 Reset controller to setup AER completions for this process 00:09:48.992 Registering asynchronous event callbacks... 
00:09:48.992 Getting orig temperature thresholds of all controllers 00:09:48.992 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.992 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.992 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.992 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:48.992 Setting all controllers temperature threshold low to trigger AER 00:09:48.992 Waiting for all controllers temperature threshold to be set lower 00:09:48.992 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.992 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:48.992 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.992 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:48.992 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.992 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:48.992 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:48.992 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:48.992 Waiting for all controllers to trigger AER and reset threshold 00:09:48.992 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.992 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.992 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.992 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.992 Cleaning up... 00:09:48.992 00:09:48.992 real 0m0.414s 00:09:48.992 user 0m0.111s 00:09:48.992 sys 0m0.194s 00:09:48.992 10:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.992 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:48.992 ************************************ 00:09:48.992 END TEST nvme_multi_aen 00:09:48.992 ************************************ 00:09:48.992 10:37:19 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:48.992 10:37:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:48.992 10:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.992 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:48.992 ************************************ 00:09:48.992 START TEST nvme_startup 00:09:48.992 ************************************ 00:09:48.992 10:37:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:49.250 Initializing NVMe Controllers 00:09:49.250 Attached to 0000:00:06.0 00:09:49.250 Attached to 0000:00:07.0 00:09:49.250 Attached to 0000:00:09.0 00:09:49.250 Attached to 0000:00:08.0 00:09:49.250 Initialization complete. 00:09:49.250 Time used:130753.273 (us). 
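Both AER passes above work the same way: read each controller's temperature threshold (NVMe feature 0x04), set it below the current reading so the drive raises an asynchronous event, then restore it. Outside the harness the same nudge can be given with nvme-cli; the device path below is a placeholder, not a device from this run:

    # Sketch: hand-trigger a temperature AER with nvme-cli.
    sudo nvme get-feature /dev/nvme0 -f 0x04              # original threshold, 343 K in the runs above
    sudo nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140    # 320 K, below the 323 K current readings
    sudo nvme smart-log /dev/nvme0 | grep -i temperature  # confirm the temperature that tripped it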
00:09:49.250 00:09:49.250 real 0m0.194s 00:09:49.250 user 0m0.051s 00:09:49.250 sys 0m0.102s 00:09:49.250 10:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:49.250 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.250 ************************************ 00:09:49.250 END TEST nvme_startup 00:09:49.250 ************************************ 00:09:49.250 10:37:19 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:49.250 10:37:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.250 10:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.250 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:49.250 ************************************ 00:09:49.250 START TEST nvme_multi_secondary 00:09:49.250 ************************************ 00:09:49.250 10:37:19 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:09:49.250 10:37:19 -- nvme/nvme.sh@52 -- # pid0=64530 00:09:49.250 10:37:19 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:49.250 10:37:19 -- nvme/nvme.sh@54 -- # pid1=64531 00:09:49.250 10:37:19 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:49.250 10:37:19 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:52.529 Initializing NVMe Controllers 00:09:52.530 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:52.530 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:52.530 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:52.530 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:52.530 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:09:52.530 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:09:52.530 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:09:52.530 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:09:52.530 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:09:52.530 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:09:52.530 Initialization complete. Launching workers. 
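nvme_multi_secondary starts one primary spdk_nvme_perf (pid0 64530, core mask 0x1, 5 seconds) plus secondaries on other cores, all attached to the same DPDK shared-memory instance through -i 0. A minimal sketch of that multi-process pattern with a single secondary; the flags come from the trace, while the binary path and the settle delay are assumptions:

    #!/usr/bin/env bash
    # Sketch: primary + secondary perf processes sharing one DPDK instance (-i 0).
    PERF=${PERF:-$HOME/spdk/build/bin/spdk_nvme_perf}      # assumption: SPDK build output
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &  # primary on core 0
    primary=$!
    sleep 2                                                # crude wait for the primary's EAL init
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &  # secondary on core 1
    wait "$primary" "$!"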
00:09:52.530 ======================================================== 00:09:52.530 Latency(us) 00:09:52.530 Device Information : IOPS MiB/s Average min max 00:09:52.530 PCIE (0000:00:06.0) NSID 1 from core 1: 7303.79 28.53 2189.35 1331.16 6349.76 00:09:52.530 PCIE (0000:00:07.0) NSID 1 from core 1: 7303.79 28.53 2190.26 1359.80 6413.61 00:09:52.530 PCIE (0000:00:09.0) NSID 1 from core 1: 7303.79 28.53 2190.31 1388.34 6606.70 00:09:52.530 PCIE (0000:00:08.0) NSID 1 from core 1: 7303.79 28.53 2190.47 1347.13 6970.97 00:09:52.530 PCIE (0000:00:08.0) NSID 2 from core 1: 7303.79 28.53 2190.44 1365.02 7444.86 00:09:52.530 PCIE (0000:00:08.0) NSID 3 from core 1: 7303.79 28.53 2190.50 1243.44 7718.42 00:09:52.530 ======================================================== 00:09:52.530 Total : 43822.75 171.18 2190.22 1243.44 7718.42 00:09:52.530 00:09:52.787 Initializing NVMe Controllers 00:09:52.787 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:52.787 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:52.787 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:52.787 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:52.787 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:09:52.787 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:09:52.787 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:09:52.787 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:09:52.787 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:09:52.787 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:09:52.787 Initialization complete. Launching workers. 00:09:52.787 ======================================================== 00:09:52.787 Latency(us) 00:09:52.787 Device Information : IOPS MiB/s Average min max 00:09:52.787 PCIE (0000:00:06.0) NSID 1 from core 2: 2906.64 11.35 5502.97 976.18 12913.81 00:09:52.787 PCIE (0000:00:07.0) NSID 1 from core 2: 2906.64 11.35 5504.12 897.96 12848.54 00:09:52.787 PCIE (0000:00:09.0) NSID 1 from core 2: 2906.64 11.35 5504.10 969.58 16178.41 00:09:52.787 PCIE (0000:00:08.0) NSID 1 from core 2: 2906.64 11.35 5504.25 964.07 15722.62 00:09:52.787 PCIE (0000:00:08.0) NSID 2 from core 2: 2906.64 11.35 5503.75 910.55 12515.55 00:09:52.787 PCIE (0000:00:08.0) NSID 3 from core 2: 2906.64 11.35 5504.19 853.71 13105.88 00:09:52.787 ======================================================== 00:09:52.787 Total : 17439.86 68.12 5503.90 853.71 16178.41 00:09:52.787 00:09:52.787 10:37:23 -- nvme/nvme.sh@56 -- # wait 64530 00:09:54.693 Initializing NVMe Controllers 00:09:54.693 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:54.693 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:54.693 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:54.693 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:54.693 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:54.693 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:54.693 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:54.693 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:54.693 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:54.693 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:54.693 Initialization complete. Launching workers. 
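Any row of these latency tables can be sanity-checked by hand, since the MiB/s column is just IOPS times the 4096-byte I/O size. For the first core-1 row above:

    # 7303.79 IOPS x 4096 B per I/O = 28.53 MiB/s, matching the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 7303.79 * 4096 / (1024 * 1024) }'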
00:09:54.693 ======================================================== 00:09:54.693 Latency(us) 00:09:54.693 Device Information : IOPS MiB/s Average min max 00:09:54.693 PCIE (0000:00:06.0) NSID 1 from core 0: 10075.34 39.36 1586.86 769.96 6300.71 00:09:54.693 PCIE (0000:00:07.0) NSID 1 from core 0: 10075.34 39.36 1587.74 778.77 6300.15 00:09:54.693 PCIE (0000:00:09.0) NSID 1 from core 0: 10075.34 39.36 1587.72 784.34 6289.78 00:09:54.693 PCIE (0000:00:08.0) NSID 1 from core 0: 10075.34 39.36 1587.71 784.72 6600.23 00:09:54.693 PCIE (0000:00:08.0) NSID 2 from core 0: 10075.34 39.36 1587.68 779.99 6664.20 00:09:54.693 PCIE (0000:00:08.0) NSID 3 from core 0: 10075.34 39.36 1587.66 785.16 6278.96 00:09:54.693 ======================================================== 00:09:54.693 Total : 60452.01 236.14 1587.56 769.96 6664.20 00:09:54.693 00:09:54.693 10:37:24 -- nvme/nvme.sh@57 -- # wait 64531 00:09:54.693 10:37:24 -- nvme/nvme.sh@61 -- # pid0=64606 00:09:54.693 10:37:24 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:54.693 10:37:24 -- nvme/nvme.sh@63 -- # pid1=64607 00:09:54.693 10:37:24 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:54.693 10:37:24 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:58.019 Initializing NVMe Controllers 00:09:58.019 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:58.019 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:58.019 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:58.019 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:58.019 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:58.019 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:58.019 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:58.019 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:58.019 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:58.019 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:58.019 Initialization complete. Launching workers. 
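The -c arguments threaded through these runs are hexadecimal core masks: 0x1 selects core 0, 0x2 core 1, 0x4 core 2, and 0xf cores 0 through 3. A one-liner to decode any mask (the value shown is just an example):

    mask=0x4   # e.g. the -c 0x4 run above
    for bit in {0..7}; do
        (( (mask >> bit) & 1 )) && echo "core $bit"
    done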
00:09:58.019 ======================================================== 00:09:58.019 Latency(us) 00:09:58.019 Device Information : IOPS MiB/s Average min max 00:09:58.019 PCIE (0000:00:06.0) NSID 1 from core 0: 4664.52 18.22 3428.58 1159.54 10787.45 00:09:58.019 PCIE (0000:00:07.0) NSID 1 from core 0: 4664.52 18.22 3429.65 1225.76 10169.65 00:09:58.019 PCIE (0000:00:09.0) NSID 1 from core 0: 4664.52 18.22 3430.32 1104.50 11193.88 00:09:58.019 PCIE (0000:00:08.0) NSID 1 from core 0: 4664.52 18.22 3430.28 1324.28 11355.96 00:09:58.019 PCIE (0000:00:08.0) NSID 2 from core 0: 4664.52 18.22 3430.60 1264.18 10615.50 00:09:58.019 PCIE (0000:00:08.0) NSID 3 from core 0: 4664.52 18.22 3430.98 1058.46 9449.12 00:09:58.019 ======================================================== 00:09:58.019 Total : 27987.14 109.32 3430.07 1058.46 11355.96 00:09:58.019 00:09:58.019 Initializing NVMe Controllers 00:09:58.019 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:58.019 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:58.019 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:58.019 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:58.019 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:09:58.019 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:09:58.019 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:09:58.019 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:09:58.019 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:09:58.019 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:09:58.019 Initialization complete. Launching workers. 00:09:58.019 ======================================================== 00:09:58.019 Latency(us) 00:09:58.019 Device Information : IOPS MiB/s Average min max 00:09:58.019 PCIE (0000:00:06.0) NSID 1 from core 1: 4626.69 18.07 3456.64 916.30 9769.05 00:09:58.019 PCIE (0000:00:07.0) NSID 1 from core 1: 4626.69 18.07 3457.74 932.63 10030.92 00:09:58.019 PCIE (0000:00:09.0) NSID 1 from core 1: 4626.69 18.07 3457.73 915.06 10932.06 00:09:58.019 PCIE (0000:00:08.0) NSID 1 from core 1: 4626.69 18.07 3457.98 928.21 9533.94 00:09:58.019 PCIE (0000:00:08.0) NSID 2 from core 1: 4626.69 18.07 3458.30 945.82 10384.63 00:09:58.019 PCIE (0000:00:08.0) NSID 3 from core 1: 4626.69 18.07 3458.23 941.94 9930.92 00:09:58.019 ======================================================== 00:09:58.019 Total : 27760.17 108.44 3457.77 915.06 10932.06 00:09:58.019 00:09:59.920 Initializing NVMe Controllers 00:09:59.920 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:59.920 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:59.920 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:59.920 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:59.920 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:09:59.920 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:09:59.920 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:09:59.920 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:09:59.920 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:09:59.920 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:09:59.920 Initialization complete. Launching workers. 
00:09:59.920 ======================================================== 00:09:59.920 Latency(us) 00:09:59.920 Device Information : IOPS MiB/s Average min max 00:09:59.920 PCIE (0000:00:06.0) NSID 1 from core 2: 3589.35 14.02 4455.58 805.60 37153.74 00:09:59.920 PCIE (0000:00:07.0) NSID 1 from core 2: 3589.35 14.02 4457.07 824.24 36764.59 00:09:59.920 PCIE (0000:00:09.0) NSID 1 from core 2: 3589.35 14.02 4457.04 810.57 28351.90 00:09:59.920 PCIE (0000:00:08.0) NSID 1 from core 2: 3589.35 14.02 4457.01 817.08 37369.99 00:09:59.920 PCIE (0000:00:08.0) NSID 2 from core 2: 3589.35 14.02 4456.98 834.50 36234.45 00:09:59.920 PCIE (0000:00:08.0) NSID 3 from core 2: 3589.35 14.02 4457.17 818.92 33518.55 00:09:59.920 ======================================================== 00:09:59.920 Total : 21536.09 84.13 4456.81 805.60 37369.99 00:09:59.920 00:09:59.920 10:37:30 -- nvme/nvme.sh@65 -- # wait 64606 00:09:59.920 10:37:30 -- nvme/nvme.sh@66 -- # wait 64607 00:09:59.920 00:09:59.920 real 0m10.655s 00:09:59.920 user 0m18.612s 00:09:59.920 sys 0m0.683s 00:09:59.920 ************************************ 00:09:59.920 END TEST nvme_multi_secondary 00:09:59.920 ************************************ 00:09:59.920 10:37:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:59.920 10:37:30 -- common/autotest_common.sh@10 -- # set +x 00:09:59.920 10:37:30 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:59.920 10:37:30 -- nvme/nvme.sh@102 -- # kill_stub 00:09:59.920 10:37:30 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63541 ]] 00:09:59.920 10:37:30 -- common/autotest_common.sh@1076 -- # kill 63541 00:09:59.920 10:37:30 -- common/autotest_common.sh@1077 -- # wait 63541 00:10:00.863 [2024-12-03 10:37:31.156270] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:00.863 [2024-12-03 10:37:31.156546] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:00.863 [2024-12-03 10:37:31.156575] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:00.863 [2024-12-03 10:37:31.156598] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:01.125 [2024-12-03 10:37:31.673987] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:01.125 [2024-12-03 10:37:31.674048] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:01.125 [2024-12-03 10:37:31.674076] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:01.125 [2024-12-03 10:37:31.674089] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:02.512 [2024-12-03 10:37:32.681584] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:02.512 [2024-12-03 10:37:32.681642] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. 
Dropping the request. 00:10:02.512 [2024-12-03 10:37:32.681653] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:02.512 [2024-12-03 10:37:32.681664] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:03.895 [2024-12-03 10:37:34.194622] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:03.895 [2024-12-03 10:37:34.194682] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:03.895 [2024-12-03 10:37:34.194693] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:03.896 [2024-12-03 10:37:34.194708] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:10:03.896 10:37:34 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:03.896 10:37:34 -- common/autotest_common.sh@1083 -- # echo 2 00:10:03.896 10:37:34 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:03.896 10:37:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:03.896 10:37:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.896 10:37:34 -- common/autotest_common.sh@10 -- # set +x 00:10:03.896 ************************************ 00:10:03.896 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:03.896 ************************************ 00:10:03.896 10:37:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:03.896 * Looking for test storage... 00:10:03.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:03.896 10:37:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:03.896 10:37:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:03.896 10:37:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:04.156 10:37:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:04.156 10:37:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:04.156 10:37:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:04.156 10:37:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:04.156 10:37:34 -- scripts/common.sh@335 -- # IFS=.-: 00:10:04.156 10:37:34 -- scripts/common.sh@335 -- # read -ra ver1 00:10:04.156 10:37:34 -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.156 10:37:34 -- scripts/common.sh@336 -- # read -ra ver2 00:10:04.156 10:37:34 -- scripts/common.sh@337 -- # local 'op=<' 00:10:04.156 10:37:34 -- scripts/common.sh@339 -- # ver1_l=2 00:10:04.156 10:37:34 -- scripts/common.sh@340 -- # ver2_l=1 00:10:04.156 10:37:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:04.156 10:37:34 -- scripts/common.sh@343 -- # case "$op" in 00:10:04.156 10:37:34 -- scripts/common.sh@344 -- # : 1 00:10:04.156 10:37:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:04.156 10:37:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.156 10:37:34 -- scripts/common.sh@364 -- # decimal 1 00:10:04.156 10:37:34 -- scripts/common.sh@352 -- # local d=1 00:10:04.156 10:37:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.156 10:37:34 -- scripts/common.sh@354 -- # echo 1 00:10:04.156 10:37:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:04.156 10:37:34 -- scripts/common.sh@365 -- # decimal 2 00:10:04.156 10:37:34 -- scripts/common.sh@352 -- # local d=2 00:10:04.156 10:37:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.157 10:37:34 -- scripts/common.sh@354 -- # echo 2 00:10:04.157 10:37:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:04.157 10:37:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:04.157 10:37:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:04.157 10:37:34 -- scripts/common.sh@367 -- # return 0 00:10:04.157 10:37:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.157 10:37:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:04.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.157 --rc genhtml_branch_coverage=1 00:10:04.157 --rc genhtml_function_coverage=1 00:10:04.157 --rc genhtml_legend=1 00:10:04.157 --rc geninfo_all_blocks=1 00:10:04.157 --rc geninfo_unexecuted_blocks=1 00:10:04.157 00:10:04.157 ' 00:10:04.157 10:37:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:04.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.157 --rc genhtml_branch_coverage=1 00:10:04.157 --rc genhtml_function_coverage=1 00:10:04.157 --rc genhtml_legend=1 00:10:04.157 --rc geninfo_all_blocks=1 00:10:04.157 --rc geninfo_unexecuted_blocks=1 00:10:04.157 00:10:04.157 ' 00:10:04.157 10:37:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:04.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.157 --rc genhtml_branch_coverage=1 00:10:04.157 --rc genhtml_function_coverage=1 00:10:04.157 --rc genhtml_legend=1 00:10:04.157 --rc geninfo_all_blocks=1 00:10:04.157 --rc geninfo_unexecuted_blocks=1 00:10:04.157 00:10:04.157 ' 00:10:04.157 10:37:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:04.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.157 --rc genhtml_branch_coverage=1 00:10:04.157 --rc genhtml_function_coverage=1 00:10:04.157 --rc genhtml_legend=1 00:10:04.157 --rc geninfo_all_blocks=1 00:10:04.157 --rc geninfo_unexecuted_blocks=1 00:10:04.157 00:10:04.157 ' 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:04.157 10:37:34 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:04.157 10:37:34 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:04.157 10:37:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:04.157 10:37:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:04.157 10:37:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:04.157 10:37:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:04.157 10:37:34 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:04.157 10:37:34 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:04.157 10:37:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:04.157 10:37:34 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:04.157 10:37:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:04.157 10:37:34 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64802 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:04.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.157 10:37:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64802 00:10:04.157 10:37:34 -- common/autotest_common.sh@829 -- # '[' -z 64802 ']' 00:10:04.157 10:37:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.157 10:37:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.157 10:37:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.157 10:37:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.157 10:37:34 -- common/autotest_common.sh@10 -- # set +x 00:10:04.157 [2024-12-03 10:37:34.649614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
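The waitforlisten step above blocks until spdk_tgt answers on its default RPC socket (/var/tmp/spdk.sock) before the test continues. A standalone sketch of that start-and-wait pattern, with SPDK_DIR assumed as before:

    #!/usr/bin/env bash
    # Sketch: start spdk_tgt and poll its RPC socket until it is ready.
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk}              # assumption: local SPDK checkout
    sudo "$SPDK_DIR/build/bin/spdk_tgt" -m 0xF &  # same core mask as the trace
    tgt=$!
    until sudo "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                 # retry until the RPC server is listening
    done
    echo "spdk_tgt ($tgt) is up on /var/tmp/spdk.sock"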
00:10:04.157 [2024-12-03 10:37:34.649856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64802 ] 00:10:04.418 [2024-12-03 10:37:34.812077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.418 [2024-12-03 10:37:35.013565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.418 [2024-12-03 10:37:35.014150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.418 [2024-12-03 10:37:35.014294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.418 [2024-12-03 10:37:35.014578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.418 [2024-12-03 10:37:35.014670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.830 10:37:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.830 10:37:36 -- common/autotest_common.sh@862 -- # return 0 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:05.830 10:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.830 10:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:05.830 nvme0n1 00:10:05.830 10:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_LZgpZ.txt 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:05.830 10:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.830 10:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:05.830 true 00:10:05.830 10:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733222256 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64838 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:05.830 10:37:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:07.742 10:37:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.742 10:37:38 -- common/autotest_common.sh@10 -- # set +x 00:10:07.742 [2024-12-03 10:37:38.221443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:07.742 [2024-12-03 10:37:38.221770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:07.742 [2024-12-03 10:37:38.221798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:07.742 [2024-12-03 10:37:38.221814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:07.742 [2024-12-03 10:37:38.223755] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:07.742 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64838 00:10:07.742 10:37:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64838 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64838 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:07.742 10:37:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.742 10:37:38 -- common/autotest_common.sh@10 -- # set +x 00:10:07.742 10:37:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_LZgpZ.txt 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_LZgpZ.txt 00:10:07.742 10:37:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64802 00:10:07.742 10:37:38 -- common/autotest_common.sh@936 -- # '[' -z 64802 ']' 00:10:07.742 10:37:38 -- common/autotest_common.sh@940 -- # kill -0 64802 00:10:07.742 10:37:38 -- common/autotest_common.sh@941 -- # uname 00:10:07.742 10:37:38 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.742 10:37:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64802 00:10:07.742 killing process with pid 64802 00:10:07.742 10:37:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.742 10:37:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.742 10:37:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64802' 00:10:07.742 10:37:38 -- common/autotest_common.sh@955 -- # kill 64802 00:10:07.742 10:37:38 -- common/autotest_common.sh@960 -- # wait 64802 00:10:09.654 10:37:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:09.654 10:37:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:09.654 00:10:09.654 real 0m5.396s 00:10:09.654 user 0m19.032s 00:10:09.654 sys 0m0.563s 00:10:09.654 ************************************ 00:10:09.654 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:09.654 ************************************ 00:10:09.654 10:37:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:09.654 10:37:39 -- common/autotest_common.sh@10 -- # set +x 00:10:09.654 10:37:39 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:09.655 10:37:39 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:09.655 10:37:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:09.655 10:37:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:09.655 10:37:39 -- common/autotest_common.sh@10 -- # set +x 00:10:09.655 ************************************ 00:10:09.655 START TEST nvme_fio 00:10:09.655 ************************************ 00:10:09.655 10:37:39 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:09.655 10:37:39 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:09.655 10:37:39 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:09.655 10:37:39 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:09.655 10:37:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:09.655 10:37:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:09.655 10:37:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:09.655 10:37:39 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:09.655 10:37:39 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:09.655 10:37:39 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:09.655 10:37:39 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:09.655 10:37:39 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:09.655 10:37:39 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:09.655 10:37:39 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:09.655 10:37:39 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:09.655 10:37:39 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:09.655 10:37:40 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:09.655 10:37:40 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:09.914 10:37:40 -- nvme/nvme.sh@41 -- # bs=4096 00:10:09.914 10:37:40 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:09.914 10:37:40 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:09.914 10:37:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:09.914 10:37:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:09.914 10:37:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:09.914 10:37:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:09.914 10:37:40 -- common/autotest_common.sh@1330 -- # shift 00:10:09.914 10:37:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:09.914 10:37:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:09.914 10:37:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:09.914 10:37:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:09.914 10:37:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:09.914 10:37:40 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:09.914 10:37:40 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:09.914 10:37:40 -- common/autotest_common.sh@1336 -- # break 00:10:09.914 10:37:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:09.914 10:37:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:09.914 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:09.914 fio-3.35 00:10:09.914 Starting 1 thread 00:10:14.116 00:10:14.116 test: (groupid=0, jobs=1): err= 0: pid=64973: Tue Dec 3 10:37:44 2024 00:10:14.116 read: IOPS=19.0k, BW=74.2MiB/s (77.8MB/s)(149MiB/2001msec) 00:10:14.116 slat (nsec): min=4135, max=74452, avg=5145.51, stdev=2569.82 00:10:14.116 clat (usec): min=257, max=8526, avg=2290.97, stdev=1044.10 00:10:14.116 lat (usec): min=262, max=8530, avg=2296.12, stdev=1045.44 00:10:14.116 clat percentiles (usec): 00:10:14.116 | 1.00th=[ 1074], 5.00th=[ 1205], 10.00th=[ 1303], 20.00th=[ 1483], 00:10:14.116 | 30.00th=[ 1696], 40.00th=[ 1975], 50.00th=[ 2180], 60.00th=[ 2311], 00:10:14.116 | 70.00th=[ 2442], 80.00th=[ 2638], 90.00th=[ 3359], 95.00th=[ 4490], 00:10:14.116 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7504], 00:10:14.116 | 99.99th=[ 7963] 00:10:14.116 bw ( KiB/s): min=70208, max=97320, per=100.00%, avg=79341.33, stdev=15570.65, samples=3 00:10:14.116 iops : min=17552, max=24330, avg=19835.33, stdev=3892.66, samples=3 00:10:14.116 write: IOPS=19.0k, BW=74.2MiB/s (77.8MB/s)(149MiB/2001msec); 0 zone resets 00:10:14.116 slat (usec): min=4, max=267, avg= 5.45, stdev= 2.85 00:10:14.116 clat (usec): min=200, max=28363, avg=4419.09, stdev=4042.98 00:10:14.116 lat (usec): min=205, max=28368, avg=4424.53, stdev=4043.23 00:10:14.116 clat percentiles (usec): 00:10:14.116 | 1.00th=[ 1156], 5.00th=[ 1352], 10.00th=[ 1532], 20.00th=[ 1926], 00:10:14.116 | 30.00th=[ 2180], 40.00th=[ 2343], 50.00th=[ 2474], 60.00th=[ 2737], 00:10:14.116 | 70.00th=[ 4113], 80.00th=[ 7111], 90.00th=[10552], 95.00th=[13304], 00:10:14.116 | 99.00th=[18744], 
99.50th=[21103], 99.90th=[26346], 99.95th=[27132], 00:10:14.116 | 99.99th=[27657] 00:10:14.116 bw ( KiB/s): min=70288, max=96984, per=100.00%, avg=79202.67, stdev=15399.11, samples=3 00:10:14.116 iops : min=17572, max=24246, avg=19800.67, stdev=3849.78, samples=3 00:10:14.116 lat (usec) : 250=0.01%, 500=0.02%, 750=0.04%, 1000=0.16% 00:10:14.116 lat (msec) : 2=31.21%, 4=50.06%, 10=12.86%, 20=5.30%, 50=0.35% 00:10:14.116 cpu : usr=99.15%, sys=0.15%, ctx=2, majf=0, minf=608 00:10:14.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.116 issued rwts: total=38016,38018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.116 00:10:14.116 Run status group 0 (all jobs): 00:10:14.116 READ: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=149MiB (156MB), run=2001-2001msec 00:10:14.116 WRITE: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=149MiB (156MB), run=2001-2001msec 00:10:14.116 ----------------------------------------------------- 00:10:14.116 Suppressions used: 00:10:14.116 count bytes template 00:10:14.116 1 32 /usr/src/fio/parse.c 00:10:14.116 1 8 libtcmalloc_minimal.so 00:10:14.116 ----------------------------------------------------- 00:10:14.116 00:10:14.116 10:37:44 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:14.116 10:37:44 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:14.116 10:37:44 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:14.116 10:37:44 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:14.116 10:37:44 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:14.116 10:37:44 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:14.375 10:37:44 -- nvme/nvme.sh@41 -- # bs=4096 00:10:14.375 10:37:44 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:14.375 10:37:44 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:14.375 10:37:44 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:14.375 10:37:44 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:14.375 10:37:44 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:14.375 10:37:44 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.375 10:37:44 -- common/autotest_common.sh@1330 -- # shift 00:10:14.375 10:37:44 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:14.375 10:37:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:14.375 10:37:44 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:14.375 10:37:44 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.375 10:37:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:14.375 10:37:44 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:14.375 10:37:44 -- common/autotest_common.sh@1335 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:14.375 10:37:44 -- common/autotest_common.sh@1336 -- # break 00:10:14.375 10:37:44 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:14.375 10:37:44 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:14.640 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:14.640 fio-3.35 00:10:14.640 Starting 1 thread 00:10:21.211 00:10:21.211 test: (groupid=0, jobs=1): err= 0: pid=65034: Tue Dec 3 10:37:51 2024 00:10:21.211 read: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(185MiB/2001msec) 00:10:21.211 slat (nsec): min=4156, max=65717, avg=5051.01, stdev=2432.91 00:10:21.211 clat (usec): min=217, max=8047, avg=2706.02, stdev=899.38 00:10:21.211 lat (usec): min=221, max=8065, avg=2711.07, stdev=901.01 00:10:21.211 clat percentiles (usec): 00:10:21.211 | 1.00th=[ 1680], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:10:21.211 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:10:21.211 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3523], 95.00th=[ 5145], 00:10:21.211 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7701], 00:10:21.211 | 99.99th=[ 7963] 00:10:21.211 bw ( KiB/s): min=88936, max=100360, per=100.00%, avg=94634.67, stdev=5712.05, samples=3 00:10:21.211 iops : min=22234, max=25090, avg=23658.67, stdev=1428.01, samples=3 00:10:21.211 write: IOPS=23.5k, BW=91.7MiB/s (96.1MB/s)(183MiB/2001msec); 0 zone resets 00:10:21.211 slat (nsec): min=4233, max=61109, avg=5307.28, stdev=2400.21 00:10:21.211 clat (usec): min=209, max=8033, avg=2706.98, stdev=899.05 00:10:21.211 lat (usec): min=213, max=8046, avg=2712.28, stdev=900.65 00:10:21.211 clat percentiles (usec): 00:10:21.211 | 1.00th=[ 1680], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:10:21.211 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:10:21.211 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3523], 95.00th=[ 5145], 00:10:21.211 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7570], 99.95th=[ 7701], 00:10:21.211 | 99.99th=[ 7963] 00:10:21.211 bw ( KiB/s): min=88344, max=99960, per=100.00%, avg=94621.33, stdev=5864.61, samples=3 00:10:21.211 iops : min=22086, max=24990, avg=23655.33, stdev=1466.15, samples=3 00:10:21.211 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:10:21.211 lat (msec) : 2=2.81%, 4=88.81%, 10=8.31% 00:10:21.211 cpu : usr=99.25%, sys=0.05%, ctx=5, majf=0, minf=608 00:10:21.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:21.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.211 issued rwts: total=47278,46963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.211 00:10:21.211 Run status group 0 (all jobs): 00:10:21.211 READ: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=185MiB (194MB), run=2001-2001msec 00:10:21.211 WRITE: bw=91.7MiB/s (96.1MB/s), 91.7MiB/s-91.7MiB/s (96.1MB/s-96.1MB/s), io=183MiB (192MB), run=2001-2001msec 00:10:21.211 ----------------------------------------------------- 00:10:21.211 Suppressions used: 00:10:21.211 count bytes template 00:10:21.211 1 32 /usr/src/fio/parse.c 00:10:21.211 1 8 
libtcmalloc_minimal.so 00:10:21.211 ----------------------------------------------------- 00:10:21.211 00:10:21.211 10:37:51 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:21.211 10:37:51 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:21.211 10:37:51 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:21.211 10:37:51 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:21.472 10:37:51 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:21.472 10:37:51 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:21.734 10:37:52 -- nvme/nvme.sh@41 -- # bs=4096 00:10:21.734 10:37:52 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:21.734 10:37:52 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:21.734 10:37:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:21.734 10:37:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:21.734 10:37:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:21.734 10:37:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:21.734 10:37:52 -- common/autotest_common.sh@1330 -- # shift 00:10:21.734 10:37:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:21.734 10:37:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:21.734 10:37:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:21.734 10:37:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:21.734 10:37:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:21.734 10:37:52 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:21.734 10:37:52 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:21.734 10:37:52 -- common/autotest_common.sh@1336 -- # break 00:10:21.734 10:37:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:21.734 10:37:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:21.996 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:21.996 fio-3.35 00:10:21.996 Starting 1 thread 00:10:27.282 00:10:27.283 test: (groupid=0, jobs=1): err= 0: pid=65101: Tue Dec 3 10:37:57 2024 00:10:27.283 read: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(117MiB/2001msec) 00:10:27.283 slat (nsec): min=4846, max=79747, avg=6352.72, stdev=3177.23 00:10:27.283 clat (usec): min=637, max=11760, avg=4248.49, stdev=1022.11 00:10:27.283 lat (usec): min=649, max=11820, avg=4254.84, stdev=1023.36 00:10:27.283 clat percentiles (usec): 00:10:27.283 | 1.00th=[ 2737], 5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3556], 00:10:27.283 | 30.00th=[ 3687], 40.00th=[ 3785], 50.00th=[ 3916], 60.00th=[ 4080], 00:10:27.283 | 70.00th=[ 4293], 80.00th=[ 4817], 90.00th=[ 5800], 95.00th=[ 6521], 00:10:27.283 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[ 8979], 99.95th=[10028], 
00:10:27.283 | 99.99th=[11731] 00:10:27.283 bw ( KiB/s): min=56120, max=62840, per=99.27%, avg=59314.67, stdev=3372.18, samples=3 00:10:27.283 iops : min=14030, max=15710, avg=14828.67, stdev=843.05, samples=3 00:10:27.283 write: IOPS=14.9k, BW=58.3MiB/s (61.2MB/s)(117MiB/2001msec); 0 zone resets 00:10:27.283 slat (nsec): min=4976, max=90003, avg=6747.63, stdev=3283.00 00:10:27.283 clat (usec): min=592, max=11693, avg=4291.61, stdev=1031.24 00:10:27.283 lat (usec): min=605, max=11705, avg=4298.36, stdev=1032.46 00:10:27.283 clat percentiles (usec): 00:10:27.283 | 1.00th=[ 2769], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3589], 00:10:27.283 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3949], 60.00th=[ 4113], 00:10:27.283 | 70.00th=[ 4359], 80.00th=[ 4883], 90.00th=[ 5866], 95.00th=[ 6587], 00:10:27.283 | 99.00th=[ 7635], 99.50th=[ 8029], 99.90th=[ 9241], 99.95th=[10028], 00:10:27.283 | 99.99th=[10552] 00:10:27.283 bw ( KiB/s): min=56376, max=62632, per=98.96%, avg=59122.67, stdev=3196.97, samples=3 00:10:27.283 iops : min=14094, max=15658, avg=14780.67, stdev=799.24, samples=3 00:10:27.283 lat (usec) : 750=0.01%, 1000=0.01% 00:10:27.283 lat (msec) : 2=0.16%, 4=54.38%, 10=45.39%, 20=0.05% 00:10:27.283 cpu : usr=98.95%, sys=0.00%, ctx=6, majf=0, minf=608 00:10:27.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:27.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.283 issued rwts: total=29891,29887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.283 00:10:27.283 Run status group 0 (all jobs): 00:10:27.283 READ: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=117MiB (122MB), run=2001-2001msec 00:10:27.283 WRITE: bw=58.3MiB/s (61.2MB/s), 58.3MiB/s-58.3MiB/s (61.2MB/s-61.2MB/s), io=117MiB (122MB), run=2001-2001msec 00:10:27.283 ----------------------------------------------------- 00:10:27.283 Suppressions used: 00:10:27.283 count bytes template 00:10:27.283 1 32 /usr/src/fio/parse.c 00:10:27.283 1 8 libtcmalloc_minimal.so 00:10:27.283 ----------------------------------------------------- 00:10:27.283 00:10:27.283 10:37:57 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:27.283 10:37:57 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:27.283 10:37:57 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:27.283 10:37:57 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:27.283 10:37:57 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:27.283 10:37:57 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:27.544 10:37:57 -- nvme/nvme.sh@41 -- # bs=4096 00:10:27.544 10:37:57 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:27.544 10:37:57 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:27.544 10:37:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:27.544 10:37:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:27.544 10:37:57 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:10:27.544 10:37:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:27.544 10:37:57 -- common/autotest_common.sh@1330 -- # shift 00:10:27.544 10:37:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:27.544 10:37:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:27.544 10:37:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:27.544 10:37:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:27.544 10:37:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:27.544 10:37:57 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:27.544 10:37:57 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:27.544 10:37:57 -- common/autotest_common.sh@1336 -- # break 00:10:27.544 10:37:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:27.544 10:37:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:27.544 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:27.544 fio-3.35 00:10:27.544 Starting 1 thread 00:10:34.180 00:10:34.181 test: (groupid=0, jobs=1): err= 0: pid=65183: Tue Dec 3 10:38:04 2024 00:10:34.181 read: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(111MiB/2001msec) 00:10:34.181 slat (nsec): min=6063, max=91603, avg=8064.34, stdev=3828.86 00:10:34.181 clat (usec): min=1081, max=13082, avg=4461.54, stdev=1238.61 00:10:34.181 lat (usec): min=1088, max=13152, avg=4469.60, stdev=1240.21 00:10:34.181 clat percentiles (usec): 00:10:34.181 | 1.00th=[ 3130], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3589], 00:10:34.181 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4113], 00:10:34.181 | 70.00th=[ 4621], 80.00th=[ 5473], 90.00th=[ 6456], 95.00th=[ 7111], 00:10:34.181 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[ 9765], 99.95th=[10814], 00:10:34.181 | 99.99th=[13042] 00:10:34.181 bw ( KiB/s): min=54800, max=58944, per=100.00%, avg=57530.67, stdev=2365.31, samples=3 00:10:34.181 iops : min=13700, max=14736, avg=14382.67, stdev=591.33, samples=3 00:10:34.181 write: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(111MiB/2001msec); 0 zone resets 00:10:34.181 slat (nsec): min=6202, max=61904, avg=8423.83, stdev=3764.80 00:10:34.181 clat (usec): min=1093, max=12999, avg=4498.77, stdev=1243.41 00:10:34.181 lat (usec): min=1101, max=13016, avg=4507.19, stdev=1244.93 00:10:34.181 clat percentiles (usec): 00:10:34.181 | 1.00th=[ 3195], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3621], 00:10:34.181 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3949], 60.00th=[ 4146], 00:10:34.181 | 70.00th=[ 4686], 80.00th=[ 5538], 90.00th=[ 6456], 95.00th=[ 7111], 00:10:34.181 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[ 9765], 99.95th=[11207], 00:10:34.181 | 99.99th=[12911] 00:10:34.181 bw ( KiB/s): min=55120, max=59240, per=100.00%, avg=57429.33, stdev=2104.78, samples=3 00:10:34.181 iops : min=13780, max=14810, avg=14357.33, stdev=526.20, samples=3 00:10:34.181 lat (msec) : 2=0.07%, 4=54.30%, 10=45.55%, 20=0.08% 00:10:34.181 cpu : usr=98.65%, sys=0.20%, ctx=4, majf=0, minf=606 00:10:34.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:34.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:34.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.181 issued rwts: total=28454,28488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.181 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.181 00:10:34.181 Run status group 0 (all jobs): 00:10:34.181 READ: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=111MiB (117MB), run=2001-2001msec 00:10:34.181 WRITE: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=111MiB (117MB), run=2001-2001msec 00:10:34.443 ----------------------------------------------------- 00:10:34.443 Suppressions used: 00:10:34.443 count bytes template 00:10:34.443 1 32 /usr/src/fio/parse.c 00:10:34.443 1 8 libtcmalloc_minimal.so 00:10:34.443 ----------------------------------------------------- 00:10:34.443 00:10:34.443 10:38:04 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:34.443 10:38:04 -- nvme/nvme.sh@46 -- # true 00:10:34.443 00:10:34.443 real 0m25.076s 00:10:34.443 user 0m16.724s 00:10:34.443 sys 0m13.759s 00:10:34.443 10:38:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:34.443 ************************************ 00:10:34.443 END TEST nvme_fio 00:10:34.443 ************************************ 00:10:34.443 10:38:04 -- common/autotest_common.sh@10 -- # set +x 00:10:34.443 ************************************ 00:10:34.443 END TEST nvme 00:10:34.443 ************************************ 00:10:34.443 00:10:34.443 real 1m40.154s 00:10:34.443 user 3m42.576s 00:10:34.443 sys 0m24.410s 00:10:34.443 10:38:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:34.443 10:38:04 -- common/autotest_common.sh@10 -- # set +x 00:10:34.443 10:38:05 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:10:34.443 10:38:05 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:34.443 10:38:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:34.443 10:38:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:34.443 10:38:05 -- common/autotest_common.sh@10 -- # set +x 00:10:34.443 ************************************ 00:10:34.443 START TEST nvme_scc 00:10:34.443 ************************************ 00:10:34.443 10:38:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:34.704 * Looking for test storage... 
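The four runs above drive fio through the SPDK NVMe external ioengine. A minimal sketch of that invocation pattern, with paths copied from the trace (the ldd lookup assumes an ASAN-instrumented build, and this condenses the fio_plugin helper traced above):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
# Preload the sanitizer runtime ahead of the plugin so ASAN symbols resolve first.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$config" \
  '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
# fio splits filenames on ':', so the PCIe address is written with dots
# (0000.00.06.0) rather than colons.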
00:10:34.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:34.704 10:38:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:34.704 10:38:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:34.704 10:38:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:34.704 10:38:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:34.704 10:38:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:34.704 10:38:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:34.704 10:38:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:34.704 10:38:05 -- scripts/common.sh@335 -- # IFS=.-: 00:10:34.704 10:38:05 -- scripts/common.sh@335 -- # read -ra ver1 00:10:34.704 10:38:05 -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.704 10:38:05 -- scripts/common.sh@336 -- # read -ra ver2 00:10:34.704 10:38:05 -- scripts/common.sh@337 -- # local 'op=<' 00:10:34.704 10:38:05 -- scripts/common.sh@339 -- # ver1_l=2 00:10:34.704 10:38:05 -- scripts/common.sh@340 -- # ver2_l=1 00:10:34.704 10:38:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:34.704 10:38:05 -- scripts/common.sh@343 -- # case "$op" in 00:10:34.704 10:38:05 -- scripts/common.sh@344 -- # : 1 00:10:34.704 10:38:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:34.704 10:38:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.704 10:38:05 -- scripts/common.sh@364 -- # decimal 1 00:10:34.704 10:38:05 -- scripts/common.sh@352 -- # local d=1 00:10:34.704 10:38:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.704 10:38:05 -- scripts/common.sh@354 -- # echo 1 00:10:34.704 10:38:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:34.704 10:38:05 -- scripts/common.sh@365 -- # decimal 2 00:10:34.704 10:38:05 -- scripts/common.sh@352 -- # local d=2 00:10:34.704 10:38:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.704 10:38:05 -- scripts/common.sh@354 -- # echo 2 00:10:34.704 10:38:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:34.704 10:38:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:34.704 10:38:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:34.704 10:38:05 -- scripts/common.sh@367 -- # return 0 00:10:34.704 10:38:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.704 10:38:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 10:38:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 10:38:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 10:38:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 10:38:05 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:34.704 10:38:05 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:34.704 10:38:05 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:34.704 10:38:05 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:34.704 10:38:05 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.704 10:38:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.704 10:38:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.704 10:38:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.704 10:38:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.704 10:38:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.704 10:38:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.704 10:38:05 -- paths/export.sh@5 -- # export PATH 00:10:34.704 10:38:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.704 10:38:05 -- nvme/functions.sh@10 -- # ctrls=() 00:10:34.704 10:38:05 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:34.704 10:38:05 -- nvme/functions.sh@11 -- # nvmes=() 00:10:34.704 10:38:05 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:34.704 10:38:05 -- nvme/functions.sh@12 -- # bdfs=() 00:10:34.704 10:38:05 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:34.704 10:38:05 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:34.704 10:38:05 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:34.704 
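With the ctrls/nvmes/bdfs maps declared, scan_nvme_ctrls (traced below) walks /sys/class/nvme and registers each controller it finds. A condensed sketch of that loop; the PCI lookup via the device symlink is an assumption, since the trace only shows the already-resolved address:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  dev=${ctrl##*/}                                  # e.g. nvme0
  pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:09.0 (assumed lookup)
  ctrls[$dev]=$dev
  bdfs[$dev]=$pci
done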
10:38:05 -- nvme/functions.sh@14 -- # nvme_name= 00:10:34.704 10:38:05 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:34.704 10:38:05 -- nvme/nvme_scc.sh@12 -- # uname 00:10:34.704 10:38:05 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:34.704 10:38:05 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:34.704 10:38:05 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:35.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:35.276 Waiting for block devices as requested 00:10:35.276 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.276 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.537 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.537 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.840 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:40.840 10:38:11 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:40.840 10:38:11 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:40.840 10:38:11 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:40.840 10:38:11 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:40.840 10:38:11 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:40.840 10:38:11 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:40.840 10:38:11 -- scripts/common.sh@15 -- # local i 00:10:40.840 10:38:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:40.840 10:38:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:40.840 10:38:11 -- scripts/common.sh@24 -- # return 0 00:10:40.840 10:38:11 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:40.840 10:38:11 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:40.840 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@18 -- # shift 00:10:40.840 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:40.840 10:38:11 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
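Each eval pair below is emitted by nvme_get, which folds nvme-cli id-ctrl output into a global associative array one "field : value" line at a time. A simplified sketch of that parse (the IFS=: split and read -r reg val come straight from the trace; the whitespace trimming is approximated):

declare -A nvme0
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}    # id-ctrl pads field names, e.g. "vid       "
  val=${val# }                # drop the space after the colon
  [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)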
00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.840 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.840 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:40.840 10:38:11 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 
10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:40.841 10:38:11 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.841 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.841 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.841 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:40.842 10:38:11 
-- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:40.842 
10:38:11 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.842 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:40.842 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:40.842 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 
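The run of entries above is `nvme_get` (nvme/functions.sh@16-23) finishing the `id-ctrl` pass for nvme0: each output line is split on `:` into a register name and value, the `[[ -n ... ]]` test skips empty values, and `eval` records the pair in a global associative array named after the controller. A minimal sketch of that loop, assuming the key-whitespace normalization that the trace itself doesn't show:

```bash
#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced above (simplified, not the
# verbatim functions.sh). The whitespace stripping on "reg" is an
# assumption; the trace only shows the IFS=:/read/eval cycle.
nvme_get() {
    local ref=$1 id_cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                 # e.g. declare -gA nvme0=(), as at @20
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # "sqes   " -> "sqes" (assumed)
        val=${val# }                    # drop the space after the colon
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"      # nvme0[sqes]=0x66, nvme0[vwc]=0x7, ...
    done < <(/usr/local/src/nvme-cli/nvme "$id_cmd" "$dev")
}
# Usage: nvme_get nvme0 id-ctrl /dev/nvme0; echo "${nvme0[subnqn]}"
```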
00:10:40.843 10:38:11 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:40.843 10:38:11 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:40.843 10:38:11 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:40.843 10:38:11 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:40.843 10:38:11 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:40.843 10:38:11 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:40.843 10:38:11 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:40.843 10:38:11 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:40.843 10:38:11 -- scripts/common.sh@15 -- # local i 00:10:40.843 10:38:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:40.843 10:38:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:40.843 10:38:11 -- scripts/common.sh@24 -- # return 0 00:10:40.843 10:38:11 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:40.843 10:38:11 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:40.843 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@18 -- # shift 00:10:40.843 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:40.843 10:38:11 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.843 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.843 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:40.843 10:38:11 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:40.844 10:38:11 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.844 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:40.844 10:38:11 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.844 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 
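Before each of these per-controller parses, the outer loop (functions.sh@47-63, visible where nvme0 hands off to nvme1 above) records the controller in the ctrls/nvmes/bdfs maps once `pci_can_use` accepts its PCI address. A sketch of that bookkeeping; deriving the BDF from the device symlink is an assumption, since the trace only shows the resulting `pci=` value, and `pci_can_use` is stubbed here:

```bash
#!/usr/bin/env bash
# Sketch of the controller-discovery loop (functions.sh@47-63).
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

pci_can_use() { [[ -n $1 ]]; }   # stub; the real check consults skip lists

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:08.0 (assumed source)
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                             # nvme1
    ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61: name of the per-ctrl ns map
    bdfs["$ctrl_dev"]=$pci                           # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
done
```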
00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.845 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:40.845 10:38:11 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:40.845 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:40.846 10:38:11 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:40.846 10:38:11 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:40.846 10:38:11 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:40.846 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@18 -- # shift 00:10:40.846 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.846 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.846 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:40.846 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
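The nvme1n1 entries here are the eight LBA format descriptors (lbaf0-lbaf7). The flbas=0x4 captured earlier selects descriptor 4 — `ms:0 lbads:12 rp:0 (in use)` — and lbads is the log2 of the logical block size, so 12 means 4096-byte sectors with no per-block metadata. A quick decode from the captured values (the array literal mirrors the trace rather than a live query):

```bash
#!/usr/bin/env bash
# Decode the in-use LBA format from values shown in the trace above.
declare -A nvme1n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
fmt=$(( ${nvme1n1[flbas]} & 0xf ))   # low nibble of flbas -> index 4
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme1n1[lbaf${fmt}]}")
echo "logical block size: $(( 1 << lbads )) bytes"   # 2^12 = 4096
```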
00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:40.847 10:38:11 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:40.847 10:38:11 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:40.847 10:38:11 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:40.847 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@18 -- # shift 00:10:40.847 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.847 10:38:11 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.847 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:40.847 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.847 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.848 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:40.848 10:38:11 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.848 10:38:11 -- nvme/functions.sh@21 -- # 
read -r reg val
[trace condensed] remaining nvme1n2 id-ns fields read into the nvme1n2 array via the IFS=:/read/eval loop at nvme/functions.sh@21-23: nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:10:40.849 10:38:11 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2
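For orientation: the repeating pattern above is nvme/functions.sh's nvme_get turning plain-text `nvme id-ns` / `nvme id-ctrl` output into one global bash associative array per device. A minimal sketch of that loop, reconstructed from the trace rather than quoted from the script (NVME_CMD stands in for /usr/local/src/nvme-cli/nvme):

    # Reconstruction of the traced loop (functions.sh@16-23), not verbatim source.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # e.g. declare -gA nvme1n2=()
        while IFS=: read -r reg val; do       # split each "field : value" line
            reg=${reg//[[:space:]]/}          # field name with padding stripped ("lbaf  0" -> lbaf0)
            [[ -n $val ]] || continue         # headers and blank lines carry no value
            eval "${ref}[$reg]=\"${val# }\""  # e.g. nvme1n2[nawupf]="0"
        done < <("$NVME_CMD" "$@")            # process substitution keeps the array
    }                                         # in the caller's shell, not a subshell

After `nvme_get nvme1n2 id-ns /dev/nvme1n2`, callers read values back as e.g. ${nvme1n2[nsze]}.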
00:10:40.849 10:38:11 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:40.849 10:38:11 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
00:10:40.849 10:38:11 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3
00:10:40.849 10:38:11 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
00:10:40.849 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val
00:10:40.849 10:38:11 -- nvme/functions.sh@18 -- # shift
00:10:40.849 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()'
00:10:40.849 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3
[trace condensed] nvme1n3 id-ns fields read into the nvme1n3 array: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
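A quick sanity check on the values just captured (a hypothetical one-liner, not part of the test run): nsze counts logical blocks, and flbas=0x4 (its low bits select the in-use LBA format) points at lbaf4 with lbads:12, i.e. 2^12 = 4096-byte blocks, so nvme1n3 works out to 4 GiB:

    # nvme1n3: 0x100000 blocks x 4096 bytes = 4 GiB
    echo $((0x100000 * (1 << 12)))    # prints 4294967296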
[trace condensed] nvme1n3 id-ns fields (continued): dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:10:40.850 10:38:11 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3
00:10:40.850 10:38:11 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:40.850 10:38:11 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:40.850 10:38:11 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0
00:10:40.850 10:38:11 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:40.850 10:38:11 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:40.850 10:38:11 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:40.850 10:38:11 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:10:40.850 10:38:11 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:10:40.850 10:38:11 -- scripts/common.sh@15 -- # local i
00:10:40.850 10:38:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:10:40.850 10:38:11 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:10:40.850 10:38:11 -- scripts/common.sh@24 -- # return 0
00:10:40.850 10:38:11 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:40.850 10:38:11 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:40.850 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:40.850 10:38:11 -- nvme/functions.sh@18 -- # shift
00:10:40.850 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:40.850 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
[trace condensed] nvme2 id-ctrl fields read into the nvme2 array: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
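Most of these id-ctrl fields are bitmasks or raw NVMe-spec units: wctemp=343 and cctemp=373 are kelvins (70 °C warning and 100 °C critical thresholds), and oacs=0x12a is a capability mask. A hypothetical helper (named here for illustration only; not part of functions.sh) for checking a capability bit in the arrays nvme_get fills:

    # ctrl_bit_set <array> <field> <bit>: succeeds when the bit is set.
    ctrl_bit_set() {
        local -n _ctrl=$1            # nameref onto e.g. the nvme2 array
        local v=${_ctrl[$2]:-0}
        (((v >> $3) & 1))
    }
    # oacs=0x12a has bit 1 set -> Format NVM is supported:
    ctrl_bit_set nvme2 oacs 1 && echo "nvme2 supports Format NVM"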
00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 
00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:40.852 10:38:11 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.852 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.852 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:40.852 10:38:11 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:40.853 10:38:11 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:40.853 10:38:11 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:40.853 10:38:11 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:40.853 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@18 -- # shift 00:10:40.853 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.853 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.853 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:40.853 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 
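Note: every identify field in the trace above and below follows one pattern — nvme_get pipes `nvme id-ctrl`/`id-ns` output through an `IFS=: read -r reg val` loop and stores each pair in a global associative array named after the device (nvme2, nvme2n1, ...). A reconstructed sketch of that loop, not the verbatim nvme/functions.sh source; the name nvme_get_sketch is ours, and it assumes bash 4.3+ (namerefs) plus an nvme-cli binary on PATH:

    # Reconstructed from the xtrace: parse "reg : val" lines into a global
    # associative array keyed by the register name.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"          # global array named after the device
        local -n _arr=$ref             # the trace uses eval; a nameref is equivalent
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue  # skip headers like "NVME Identify Namespace 1:"
            _arr[${reg//[[:space:]]/}]=${val# }   # e.g. _arr[nsze]=0x17a17a
        done < <(nvme "$@")
    }
    # usage: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1; echo "${nvme2n1[nsze]}"

Lines with multiple colons (the lbaf descriptors, power states) keep everything after the first colon in val, which is exactly what the eval'd assignments above show.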
00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:40.854 10:38:11 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.854 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.854 10:38:11 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:40.854 10:38:11 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:40.854 10:38:11 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:40.854 10:38:11 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:40.854 10:38:11 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:40.854 10:38:11 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:40.854 10:38:11 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:40.854 10:38:11 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:40.854 10:38:11 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:40.854 10:38:11 -- scripts/common.sh@15 -- # local i 00:10:40.854 10:38:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:40.854 10:38:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:40.855 10:38:11 -- scripts/common.sh@24 -- # return 0 00:10:40.855 10:38:11 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:40.855 10:38:11 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:40.855 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@18 -- # shift 00:10:40.855 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.855 10:38:11 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:40.855 10:38:11 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:40.855 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.856 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:40.856 10:38:11 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:40.856 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:41.119 10:38:11 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
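Two of the values captured just above are easy to misread: per the NVMe spec, id-ctrl reports WCTEMP and CCTEMP in kelvins, so nvme3's 343/373 are a 70 °C warning and 100 °C critical composite-temperature threshold. A quick conversion, assuming the nvme3 array built by the trace is still in scope:

    # Kelvin -> Celsius for the thresholds parsed above (whole kelvins,
    # so integer math suffices).
    echo "warning:  $(( nvme3[wctemp] - 273 )) C"   # 343 K -> 70 C
    echo "critical: $(( nvme3[cctemp] - 273 )) C"   # 373 K -> 100 C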
00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:41.119 10:38:11 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.119 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.119 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
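The ONCS word captured here (0x15d, binary 1_0101_1101) is what later qualifies these controllers for the SCC test: per the NVMe base spec, bits 0, 2, 3, 4, 6 and 8 advertise Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy. Bit 8 (Copy) is the one ctrl_has_scc tests further down in this log, with the same expression:

    # Same bit test the script applies below in ctrl_has_scc.
    oncs=0x15d
    (( oncs & 1 << 8 )) && echo "controller supports the Copy command (SCC)"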
00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:41.120 10:38:11 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:41.120 10:38:11 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:41.120 10:38:11 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:41.120 10:38:11 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@18 -- # shift 00:10:41.120 10:38:11 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.120 
10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.120 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.120 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.120 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.121 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:41.121 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.121 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:41.122 10:38:11 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # IFS=: 00:10:41.122 10:38:11 -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.122 10:38:11 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:41.122 10:38:11 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:41.122 10:38:11 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:41.122 10:38:11 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:41.122 10:38:11 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:41.122 10:38:11 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:41.122 10:38:11 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:41.122 10:38:11 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:41.122 10:38:11 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:41.122 10:38:11 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:41.122 10:38:11 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:41.122 10:38:11 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:41.122 10:38:11 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:41.122 10:38:11 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:41.122 10:38:11 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:41.122 10:38:11 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:41.122 10:38:11 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:41.122 10:38:11 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # echo nvme1 00:10:41.122 10:38:11 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:41.122 10:38:11 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:41.122 
10:38:11 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:41.122 10:38:11 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:41.122 10:38:11 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # echo nvme0 00:10:41.122 10:38:11 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:41.122 10:38:11 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:41.122 10:38:11 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:41.122 10:38:11 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:41.122 10:38:11 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # echo nvme3 00:10:41.122 10:38:11 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:41.122 10:38:11 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:41.122 10:38:11 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:41.122 10:38:11 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:41.122 10:38:11 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:41.122 10:38:11 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:41.122 10:38:11 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:41.122 10:38:11 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:41.122 10:38:11 -- nvme/functions.sh@197 -- # echo nvme2 00:10:41.122 10:38:11 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:41.122 10:38:11 -- nvme/functions.sh@206 -- # echo nvme1 00:10:41.122 10:38:11 -- nvme/functions.sh@207 -- # return 0 00:10:41.122 10:38:11 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:41.122 10:38:11 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:10:41.122 10:38:11 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:42.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.062 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.062 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.062 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.322 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.322 10:38:12 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:10:42.322 10:38:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:10:42.322 10:38:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:42.322 10:38:12 -- common/autotest_common.sh@10 -- # set +x
00:10:42.322 ************************************
00:10:42.322 START TEST nvme_simple_copy
00:10:42.322 ************************************
00:10:42.322 10:38:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:10:42.582 Initializing NVMe Controllers
00:10:42.582 Attaching to 0000:00:08.0
00:10:42.582 Controller supports SCC. Attached to 0000:00:08.0
00:10:42.582 Namespace ID: 1 size: 4GB
00:10:42.582 Initialization complete.
00:10:42.582 
00:10:42.582 Controller QEMU NVMe Ctrl (12342 )
00:10:42.582 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:42.582 Namespace Block Size:4096
00:10:42.582 Writing LBAs 0 to 63 with Random Data
00:10:42.582 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:42.582 LBAs matching Written Data: 64
00:10:42.582 
00:10:42.582 ************************************
00:10:42.582 END TEST nvme_simple_copy
00:10:42.582 ************************************
00:10:42.582 real 0m0.289s
00:10:42.582 user 0m0.113s
00:10:42.582 sys 0m0.072s
00:10:42.582 10:38:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:42.582 10:38:13 -- common/autotest_common.sh@10 -- # set +x
00:10:42.582 ************************************
00:10:42.582 END TEST nvme_scc
00:10:42.582 ************************************
00:10:42.582 
00:10:42.582 real 0m8.092s
00:10:42.582 user 0m1.157s
00:10:42.582 sys 0m1.600s
00:10:42.582 10:38:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:42.582 10:38:13 -- common/autotest_common.sh@10 -- # set +x
00:10:42.582 10:38:13 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]]
00:10:42.582 10:38:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:42.582 10:38:13 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]]
00:10:42.582 10:38:13 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]]
00:10:42.582 10:38:13 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:42.582 10:38:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:42.582 10:38:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:42.582 10:38:13 -- common/autotest_common.sh@10 -- # set +x
00:10:42.582 ************************************
00:10:42.582 START TEST nvme_fdp
00:10:42.582 ************************************
00:10:42.582 10:38:13 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh
00:10:42.844 * Looking for test storage...
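Before following the FDP run below, it is worth pinning down what the nvme_scc phase above actually did: get_ctrl_with_feature walked every scanned controller, read its cached ONCS word (0x15d on all four here), and kept only those with bit 8 (Simple Copy) set; nvme1 at 0000:00:08.0 came back first, and simple_copy then wrote LBAs 0-63, copied them to LBA 256, and verified all 64 against a 4096-byte block size (the in-use lbaf4 above reports lbads:12, i.e. 2^12 bytes, matching "Namespace Block Size:4096"). A condensed sketch of that capability test, with the helpers from test/common/nvme/functions.sh collapsed into one function and the array contents hard-coded from values visible in this log rather than parsed live:

    # Condensed sketch of the ctrl_has_scc path traced above. The real
    # script populates the array from `nvme id-ctrl`; oncs is hard-coded
    # here from the 'nvme1[oncs]=0x15d' assignment in this log.
    declare -A nvme1=( [oncs]=0x15d )

    ctrl_has_scc() {
        local -n _ctrl=$1              # nameref, as at functions.sh@73
        local oncs=${_ctrl[oncs]}
        (( oncs & 1 << 8 ))            # ONCS bit 8 = Simple Copy supported
    }

    ctrl_has_scc nvme1 && echo "nvme1 supports SCC"   # 0x15d & 0x100 != 0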
00:10:42.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.844 10:38:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:42.844 10:38:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:42.844 10:38:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:42.844 10:38:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:42.844 10:38:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:42.844 10:38:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:42.844 10:38:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:42.844 10:38:13 -- scripts/common.sh@335 -- # IFS=.-: 00:10:42.844 10:38:13 -- scripts/common.sh@335 -- # read -ra ver1 00:10:42.844 10:38:13 -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.844 10:38:13 -- scripts/common.sh@336 -- # read -ra ver2 00:10:42.844 10:38:13 -- scripts/common.sh@337 -- # local 'op=<' 00:10:42.844 10:38:13 -- scripts/common.sh@339 -- # ver1_l=2 00:10:42.844 10:38:13 -- scripts/common.sh@340 -- # ver2_l=1 00:10:42.844 10:38:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:42.844 10:38:13 -- scripts/common.sh@343 -- # case "$op" in 00:10:42.844 10:38:13 -- scripts/common.sh@344 -- # : 1 00:10:42.844 10:38:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:42.844 10:38:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.844 10:38:13 -- scripts/common.sh@364 -- # decimal 1 00:10:42.844 10:38:13 -- scripts/common.sh@352 -- # local d=1 00:10:42.844 10:38:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.844 10:38:13 -- scripts/common.sh@354 -- # echo 1 00:10:42.844 10:38:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:42.844 10:38:13 -- scripts/common.sh@365 -- # decimal 2 00:10:42.844 10:38:13 -- scripts/common.sh@352 -- # local d=2 00:10:42.844 10:38:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.844 10:38:13 -- scripts/common.sh@354 -- # echo 2 00:10:42.844 10:38:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:42.844 10:38:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:42.844 10:38:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:42.844 10:38:13 -- scripts/common.sh@367 -- # return 0 00:10:42.844 10:38:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.844 10:38:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.844 --rc genhtml_branch_coverage=1 00:10:42.844 --rc genhtml_function_coverage=1 00:10:42.844 --rc genhtml_legend=1 00:10:42.844 --rc geninfo_all_blocks=1 00:10:42.844 --rc geninfo_unexecuted_blocks=1 00:10:42.844 00:10:42.844 ' 00:10:42.844 10:38:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.844 --rc genhtml_branch_coverage=1 00:10:42.844 --rc genhtml_function_coverage=1 00:10:42.844 --rc genhtml_legend=1 00:10:42.844 --rc geninfo_all_blocks=1 00:10:42.844 --rc geninfo_unexecuted_blocks=1 00:10:42.844 00:10:42.844 ' 00:10:42.844 10:38:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:42.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.845 --rc genhtml_branch_coverage=1 00:10:42.845 --rc genhtml_function_coverage=1 00:10:42.845 --rc genhtml_legend=1 00:10:42.845 --rc geninfo_all_blocks=1 00:10:42.845 --rc geninfo_unexecuted_blocks=1 00:10:42.845 00:10:42.845 ' 00:10:42.845 10:38:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:42.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.845 --rc genhtml_branch_coverage=1 00:10:42.845 --rc genhtml_function_coverage=1 00:10:42.845 --rc genhtml_legend=1 00:10:42.845 --rc geninfo_all_blocks=1 00:10:42.845 --rc geninfo_unexecuted_blocks=1 00:10:42.845 00:10:42.845 ' 00:10:42.845 10:38:13 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:42.845 10:38:13 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:42.845 10:38:13 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:42.845 10:38:13 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:42.845 10:38:13 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.845 10:38:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.845 10:38:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.845 10:38:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.845 10:38:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 10:38:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 10:38:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 10:38:13 -- paths/export.sh@5 -- # export PATH 00:10:42.845 10:38:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.845 10:38:13 -- nvme/functions.sh@10 -- # ctrls=() 00:10:42.845 10:38:13 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:42.845 10:38:13 -- nvme/functions.sh@11 -- # nvmes=() 00:10:42.845 10:38:13 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:42.845 10:38:13 -- nvme/functions.sh@12 -- # bdfs=() 00:10:42.845 10:38:13 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:42.845 10:38:13 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:42.845 10:38:13 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:42.845 
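The 'lt 1.15 2' exchange traced above is scripts/common.sh deciding whether the installed lcov predates 2.0, so that autotest_common.sh can export a matching set of LCOV_OPTS. cmp_versions splits both version strings on '.', '-' and ':', pads the shorter one with zeros, and compares field by field. A reduced sketch of just the less-than path (the real helper takes an operator argument and validates each field with its decimal() check, both omitted here):

    # cmp_lt 1.15 2  ->  true, because field 0 already compares 1 < 2.
    cmp_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing -> 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }

    # Fed the same way the trace does, via `lcov --version | awk '{print $NF}'`:
    cmp_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"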
10:38:13 -- nvme/functions.sh@14 -- # nvme_name= 00:10:42.845 10:38:13 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.845 10:38:13 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:43.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.417 Waiting for block devices as requested 00:10:43.418 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.418 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.679 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.679 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.980 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:48.980 10:38:19 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:48.980 10:38:19 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:48.980 10:38:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.980 10:38:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:48.980 10:38:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:48.980 10:38:19 -- scripts/common.sh@15 -- # local i 00:10:48.980 10:38:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:48.980 10:38:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:48.980 10:38:19 -- scripts/common.sh@24 -- # return 0 00:10:48.980 10:38:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:48.980 10:38:19 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:48.980 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.980 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:48.980 10:38:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.980 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.980 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.980 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 
10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:48.981 
10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:48.981 10:38:19 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:48.981 10:38:19 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.981 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.981 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 
10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.982 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.982 10:38:19 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.982 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:48.983 10:38:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
00:10:48.983 10:38:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:48.983 10:38:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:48.983 10:38:19 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:48.983 10:38:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.983 10:38:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:48.983 10:38:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:48.983 10:38:19 -- scripts/common.sh@15 -- # local i 00:10:48.983 10:38:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:48.983 10:38:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:48.983 10:38:19 -- scripts/common.sh@24 -- # return 0 00:10:48.983 10:38:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:48.983 10:38:19 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:48.983 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.983 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:48.983 
10:38:19 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:48.983 10:38:19 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.983 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:48.983 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 
00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
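[Editor's note: most optional-feature fields in this run (hctma, mntmt, mxtmt, sanicap, hmminds, ...) parse as 0, which is expected for the QEMU-emulated controller this VM exposes (see the qemu subnqn further down). A hypothetical convenience lookup with a default, for scripts consuming these arrays; get_field is not part of functions.sh:

    # Read a parsed field, defaulting to 0 when the device left it unset.
    get_field() { local -n a=$1; echo "${a[$2]:-0}"; }
    get_field nvme1 mntmt    # -> 0 (thermal management not implemented)
]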
00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:48.984 10:38:19 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.984 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.984 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:48.985 10:38:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:48.985 10:38:19 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.985 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:48.985 10:38:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:48.985 10:38:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:48.985 10:38:19 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:48.985 10:38:19 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:48.985 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:48.985 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.985 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 
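[Editor's note: a few entries back (@53-@58) the script switched from the controller to its namespaces: a nameref binds _ctrl_ns to the per-controller table nvme1_ns, then each /sys/class/nvme/nvme1/nvme1n* node is identified with `nvme id-ns` and parsed by the same nvme_get, indexed by namespace number. A sketch under the same assumptions as the nvme_get sketch above:

    enumerate_namespaces() {
        local ctrl=$1                            # e.g. /sys/class/nvme/nvme1
        local -n _ctrl_ns=${ctrl##*/}_ns         # nameref to nvme1_ns
        local ns ns_dev
        for ns in "$ctrl/${ctrl##*/}n"*; do      # nvme1n1 nvme1n2 nvme1n3
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills nvme1n1=( [nsze]=... )
            _ctrl_ns[${ns_dev##*n}]=$ns_dev           # nvme1_ns[1]=nvme1n1, ...
        done
    }

    declare -A nvme1_ns
    enumerate_namespaces /sys/class/nvme/nvme1
]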
00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:48.986 10:38:19 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.986 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:48.986 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.986 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:48.987 10:38:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:48.987 10:38:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:48.987 10:38:19 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:48.987 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.987 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 
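[Editor's note: worth decoding from the nvme1n1 fields traced above: the low nibble of flbas=0x4 selects LBA format 4, whose descriptor reads `ms:0 lbads:12 rp:0 (in use)`, i.e. 4096-byte blocks with no metadata, and with nsze=0x100000 blocks that makes each of these namespaces 4 GiB:

    # Byte size from the parsed fields (values from the trace; lbads is
    # log2 of the logical block size, flbas bits 0-3 pick the active format):
    flbas=$(( 0x4 & 0xf ))            # -> 4, i.e. descriptor lbaf4
    lbads=12                          # from "ms:0 lbads:12 rp:0 (in use)"
    nsze=$(( 0x100000 ))              # namespace size in logical blocks
    echo $(( nsze * (1 << lbads) ))   # -> 4294967296 bytes = 4 GiB
]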
00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:48.987 10:38:19 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.987 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:48.987 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:48.987 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 
10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:48.988 10:38:19 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:10:48.988 10:38:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:48.988 10:38:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
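[Editor's note: once nvme1n3 below is parsed the same way, entries @60-@63 near the end of this stretch file the controller into the script's global lookup tables, and the outer @47 loop moves on to nvme2, gated by pci_can_use (whose allow/block checks appear to match empty lists here and so return 0). A minimal sketch of that bookkeeping, assuming the tables are declared once, globally, as the trace implies:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    ctrls[$ctrl_dev]=$ctrl_dev                   # ctrls[nvme1]=nvme1
    nvmes[$ctrl_dev]=${ctrl_dev}_ns              # name of its namespace table
    bdfs[$ctrl_dev]=$pci                         # bdfs[nvme1]=0000:00:08.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # ordered_ctrls[1]=nvme1
]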
00:10:48.988 10:38:19 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:48.988 10:38:19 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:48.988 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.988 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.988 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:48.988 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:48.988 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.989 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:10:48.989 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.989 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.990 10:38:19 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:10:48.990 10:38:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:48.990 10:38:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:48.990 10:38:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:10:48.990 10:38:19 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:48.990 10:38:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.990 10:38:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:10:48.990 10:38:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:10:48.990 10:38:19 -- scripts/common.sh@15 -- # local i 00:10:48.990 10:38:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:10:48.990 10:38:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:48.990 10:38:19 -- scripts/common.sh@24 -- # return 0 00:10:48.990 10:38:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:48.990 10:38:19 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:48.990 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.990 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- 
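The loop replayed just above walks /sys/class/nvme/nvme*, resolves each controller's PCI address (0000:00:06.0 here), lets pci_can_use veto blocked devices, and only then names the controller (nvme2) and probes it. A minimal standalone sketch of that discovery scaffold, assuming an empty block list as in this run (the [[ -z '' ]] check) and assuming the PCI address is taken from the sysfs device symlink, which the trace itself does not show:

  # Sketch only; mirrors the discovery pattern visible in the trace above.
  # Assumption: pci_blocked is empty, as it is in this run.
  pci_blocked=""
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed source of e.g. 0000:00:06.0
    [[ -n $pci_blocked && $pci =~ $pci_blocked ]] && continue
    ctrl_dev=${ctrl##*/}                             # e.g. nvme2
    echo "would probe /dev/$ctrl_dev at $pci"
  done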
nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
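Every vid/ssvid/sn/mn pair above comes from the same small parser: the id-ctrl output is split on ':' by read, and eval stores each field in a bash associative array named after the device. A minimal sketch of that loop, assuming one "name : value" pair per output line and a plain nvme binary on PATH (this run invokes /usr/local/src/nvme-cli/nvme):

  # Sketch of the key/value parse the trace is replaying.
  declare -A ctrl_info=()
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue             # headers and blank lines carry no value
    reg=${reg%% *}                        # strip the padding after the key
    eval "ctrl_info[$reg]=\"${val# }\""   # e.g. ctrl_info[vid]=0x1b36
  done < <(nvme id-ctrl /dev/nvme2)
  echo "vid=${ctrl_info[vid]} sn=${ctrl_info[sn]}"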
00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.990 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.990 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.990 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 
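The ver=0x10400 captured at the start of this chunk packs the spec version as major<<16 | minor<<8 | tertiary, so this QEMU controller reports NVMe 1.4.0. Decoded:

  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))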
10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:48.991 10:38:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:48.991 10:38:19 -- 
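The wctemp=343 and cctemp=373 values above look odd until you recall that the spec reports composite temperature thresholds in integer Kelvin, so this controller warns at about 70 C and flags critical at about 100 C:

  wctemp=343; cctemp=373
  echo "warning threshold:  $((wctemp - 273)) C"   # 70 C
  echo "critical threshold: $((cctemp - 273)) C"   # 100 C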
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:48.991 10:38:19 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.991 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.991 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 
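The sqes=0x66 and cqes=0x44 fields parsed just above are packed log2 sizes: the low nibble is the required (minimum) queue entry size and the high nibble the maximum, both as powers of two. Worked out:

  sqes=0x66; cqes=0x44
  printf 'SQE min/max: %d/%d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
  printf 'CQE min/max: %d/%d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
  # Prints 64/64 and 16/16, the standard NVMe submission/completion entry sizes.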
00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 
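Fields such as oncs=0x15d and vwc=0x7 above are bitmasks, so consumers test individual bits rather than compare whole values. For example, with bit positions as defined in the NVMe base specification:

  oncs=0x15d; vwc=0x7
  (( oncs & (1 << 2) )) && echo "Dataset Management supported"
  (( oncs & (1 << 3) )) && echo "Write Zeroes supported"
  (( vwc & 1 ))         && echo "volatile write cache present"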
00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.992 10:38:19 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.992 10:38:19 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.992 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.992 10:38:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:48.993 10:38:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:48.993 10:38:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:48.993 10:38:19 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:48.993 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.993 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 
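Here the trace moves from controller nvme2 to its namespaces: local -n _ctrl_ns=nvme2_ns makes _ctrl_ns a bash nameref, so one generic loop can fill an array named after whichever controller is current. A self-contained sketch of that plumbing (fill_ns_map is a hypothetical wrapper, not a function from the script):

  # fill_ns_map is illustrative only; it mimics the nameref pattern above.
  declare -A nvme2_ns=()
  fill_ns_map() {
    local ctrl=$1 ns
    local -n _ctrl_ns=${ctrl}_ns          # nameref: assignments land in nvme2_ns
    for ns in /sys/class/nvme/$ctrl/${ctrl}n*; do
      [[ -e $ns ]] || continue
      _ctrl_ns[${ns##*n}]=${ns##*/}       # e.g. _ctrl_ns[1]=nvme2n1
    done
  }
  fill_ns_map nvme2
  declare -p nvme2_ns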
10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:48.993 10:38:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:48.993 
10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:48.993 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.993 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.993 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:48.994 10:38:19 
-- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:48.994 10:38:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:48.994 10:38:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:48.994 10:38:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:48.994 10:38:19 
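The namespace fields above are enough to size nvme2n1: the low four bits of flbas=0x7 select LBA format 7 (and lbaf7 is indeed the entry marked "(in use)"), lbaf7 carries lbads:12, i.e. 4096-byte blocks, and nsze=0x17a17a is the block count. Worked out:

  nsze=0x17a17a; flbas=0x7; lbads=12
  echo "in-use LBA format: $((flbas & 0xf))"       # 7
  echo "capacity: $((nsze * (1 << lbads))) bytes"  # 1548666 * 4096 = 6343335936, about 5.9 GiB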
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:48.994 10:38:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.994 10:38:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:48.994 10:38:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:48.994 10:38:19 -- scripts/common.sh@15 -- # local i 00:10:48.994 10:38:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:48.994 10:38:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:48.994 10:38:19 -- scripts/common.sh@24 -- # return 0 00:10:48.994 10:38:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:48.994 10:38:19 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:48.994 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:48.994 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.994 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:48.994 10:38:19 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.994 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 
10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:48.995 10:38:19 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.995 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.995 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:48.996 
10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.996 10:38:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:48.996 10:38:19 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.996 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.997 10:38:19 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:48.997 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.997 10:38:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:48.997 10:38:19 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:49.260 10:38:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:49.260 10:38:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:49.260 10:38:19 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:49.260 10:38:19 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@18 -- # shift 00:10:49.260 10:38:19 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:49.260 
10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.260 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.260 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:49.260 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 
10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:49.261 10:38:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # IFS=: 00:10:49.261 10:38:19 -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.261 10:38:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:49.261 10:38:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:49.261 10:38:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:49.261 10:38:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:49.261 10:38:19 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:49.261 10:38:19 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:49.261 10:38:19 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:49.261 10:38:19 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:49.261 10:38:19 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:49.261 10:38:19 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:49.261 10:38:19 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:49.261 10:38:19 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:49.261 10:38:19 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:49.261 10:38:19 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:49.261 10:38:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:49.261 10:38:19 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:49.261 10:38:19 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:49.261 10:38:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:49.261 10:38:19 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:49.261 10:38:19 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:49.261 10:38:19 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:49.261 10:38:19 -- nvme/functions.sh@76 -- # echo 0x8000 00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:49.261 10:38:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:49.261 10:38:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:49.261 10:38:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:49.261 10:38:19 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:49.261 10:38:19 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:49.261 10:38:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:49.261 10:38:19 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt
00:10:49.261 10:38:19 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:10:49.261 10:38:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:10:49.261 10:38:19 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]]
00:10:49.261 10:38:19 -- nvme/functions.sh@76 -- # echo 0x88010
00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # ctratt=0x88010
00:10:49.261 10:38:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:49.261 10:38:19 -- nvme/functions.sh@197 -- # echo nvme0
00:10:49.261 10:38:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:49.261 10:38:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3
00:10:49.261 10:38:19 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt
00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # get_ctratt nvme3
00:10:49.261 10:38:19 -- nvme/functions.sh@164 -- # local ctrl=nvme3
00:10:49.261 10:38:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt
00:10:49.261 10:38:19 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt
00:10:49.261 10:38:19 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:49.261 10:38:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:49.261 10:38:19 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:10:49.261 10:38:19 -- nvme/functions.sh@76 -- # echo 0x8000
00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # ctratt=0x8000
00:10:49.261 10:38:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:49.261 10:38:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:49.261 10:38:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2
00:10:49.261 10:38:19 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt
00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # get_ctratt nvme2
00:10:49.261 10:38:19 -- nvme/functions.sh@164 -- # local ctrl=nvme2
00:10:49.261 10:38:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt
00:10:49.261 10:38:19 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt
00:10:49.261 10:38:19 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:49.261 10:38:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:49.261 10:38:19 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:10:49.261 10:38:19 -- nvme/functions.sh@76 -- # echo 0x8000
00:10:49.261 10:38:19 -- nvme/functions.sh@176 -- # ctratt=0x8000
00:10:49.261 10:38:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:49.261 10:38:19 -- nvme/functions.sh@204 -- # trap - ERR
00:10:49.261 10:38:19 -- nvme/functions.sh@204 -- # print_backtrace
00:10:49.261 10:38:19 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:10:49.261 10:38:19 -- common/autotest_common.sh@1142 -- # return 0
00:10:49.261 10:38:19 -- nvme/functions.sh@204 -- # trap - ERR
00:10:49.261 10:38:19 -- nvme/functions.sh@204 -- # print_backtrace
00:10:49.261 10:38:19 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:10:49.261 10:38:19 -- common/autotest_common.sh@1142 -- # return 0
00:10:49.261 10:38:19 -- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:10:49.262 10:38:19 -- nvme/functions.sh@206 -- # echo nvme0
00:10:49.262 10:38:19 -- nvme/functions.sh@207 -- # return 0
00:10:49.262 10:38:19 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0
00:10:49.262 10:38:19 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0
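The controller walk above reduces to a single predicate: a controller supports Flexible Data Placement when bit 19 of its Identify Controller CTRATT field is set. Here nvme0 reports ctratt=0x88010, so 0x88010 & (1 << 19) is non-zero and nvme0 is selected, while the QEMU controllers reporting 0x8000 are skipped. A minimal standalone sketch of the same check follows; it is an illustration assuming nvme-cli with JSON output and jq are available, not the autotest code itself, which instead parses the plain-text id-ctrl dump into bash associative arrays:

```bash
#!/usr/bin/env bash
# Hedged sketch: list FDP-capable controllers by testing CTRATT bit 19,
# the same test ctrl_has_fdp() performs above. Assumes nvme-cli with JSON
# output and jq are installed.
for ctrl in /dev/nvme[0-9]*; do
    [[ $ctrl =~ nvme[0-9]+$ ]] || continue      # skip namespaces (nvme0n1, ...)
    ctratt=$(nvme id-ctrl "$ctrl" --output-format=json | jq -r '.ctratt')
    if (( ctratt & (1 << 19) )); then
        printf '%s supports FDP (ctratt=0x%x)\n' "$ctrl" "$ctratt"
    fi
done
```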
00:10:49.262 10:38:19 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:50.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:50.204 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.204 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.204 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.204 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.466 10:38:20 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:10:50.466 10:38:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:10:50.466 10:38:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:50.466 10:38:20 -- common/autotest_common.sh@10 -- # set +x
00:10:50.466 ************************************
00:10:50.466 START TEST nvme_flexible_data_placement
00:10:50.466 ************************************
00:10:50.466 10:38:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:10:50.728 Initializing NVMe Controllers
00:10:50.728 Attaching to 0000:00:09.0
00:10:50.728 Controller supports FDP Attached to 0000:00:09.0
00:10:50.728 Namespace ID: 1 Endurance Group ID: 1
00:10:50.728 Initialization complete.
00:10:50.728
00:10:50.728 ==================================
00:10:50.728 == FDP tests for Namespace: #01 ==
00:10:50.728 ==================================
00:10:50.728
00:10:50.728 Get Feature: FDP:
00:10:50.728 =================
00:10:50.728 Enabled: Yes
00:10:50.728 FDP configuration Index: 0
00:10:50.728
00:10:50.728 FDP configurations log page
00:10:50.728 ===========================
00:10:50.728 Number of FDP configurations: 1
00:10:50.728 Version: 0
00:10:50.728 Size: 112
00:10:50.728 FDP Configuration Descriptor: 0
00:10:50.728 Descriptor Size: 96
00:10:50.728 Reclaim Group Identifier format: 2
00:10:50.728 FDP Volatile Write Cache: Not Present
00:10:50.728 FDP Configuration: Valid
00:10:50.728 Vendor Specific Size: 0
00:10:50.728 Number of Reclaim Groups: 2
00:10:50.728 Number of Reclaim Unit Handles: 8
00:10:50.728 Max Placement Identifiers: 128
00:10:50.728 Number of Namespaces Supported: 256
00:10:50.728 Reclaim Unit Nominal Size: 6000000 bytes
00:10:50.728 Estimated Reclaim Unit Time Limit: Not Reported
00:10:50.728 RUH Desc #000: RUH Type: Initially Isolated
00:10:50.728 RUH Desc #001: RUH Type: Initially Isolated
00:10:50.728 RUH Desc #002: RUH Type: Initially Isolated
00:10:50.728 RUH Desc #003: RUH Type: Initially Isolated
00:10:50.728 RUH Desc #004: RUH Type: Initially Isolated
00:10:50.728 RUH Desc #005: RUH Type: Initially Isolated
00:10:50.728 RUH Desc #006: RUH Type: Initially Isolated
00:10:50.728 RUH Desc #007: RUH Type: Initially Isolated
00:10:50.728
00:10:50.728 FDP reclaim unit handle usage log page
00:10:50.728 ======================================
00:10:50.728 Number of Reclaim Unit Handles: 8
00:10:50.728 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:50.728 RUH Usage Desc #001: RUH Attributes: Unused
00:10:50.728 RUH Usage Desc #002: RUH Attributes: Unused
00:10:50.728 RUH Usage Desc #003: RUH Attributes: Unused
00:10:50.728 RUH Usage Desc #004: RUH Attributes: Unused
00:10:50.728 RUH Usage Desc #005: RUH Attributes: Unused
00:10:50.728 RUH Usage Desc #006: RUH Attributes: Unused
00:10:50.728 RUH Usage Desc #007: RUH Attributes: Unused
00:10:50.728
00:10:50.728 FDP statistics log page
00:10:50.728 =======================
00:10:50.728 Host bytes with metadata written: 888270848
00:10:50.728 Media bytes with metadata written: 888410112
00:10:50.728 Media bytes erased: 0
00:10:50.728
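The three dumps above come from dedicated FDP log pages: NVMe TP4146 assigns the FDP configurations page log ID 0x20, reclaim unit handle usage 0x21, FDP statistics 0x22, and FDP events 0x23. As a rough sketch of pulling the same data by hand, assuming a recent nvme-cli whose get-log accepts --lsi (the FDP pages are scoped to an endurance group, selected through the Log Specific Identifier; this controller reported Endurance Group ID 1 at attach):

```bash
# Hedged sketch: raw reads of the FDP log pages summarized above.
# Log IDs follow NVMe TP4146; --lsi carries the endurance group ID.
dev=/dev/nvme0
for lid in 0x20 0x21 0x22 0x23; do
    echo "=== log page $lid ==="
    nvme get-log "$dev" --log-id="$lid" --log-len=512 --lsi=1
done
```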
00:10:50.728 FDP Reclaim unit handle status
00:10:50.728 ==============================
00:10:50.728 Number of RUHS descriptors: 2
00:10:50.728 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000010e1
00:10:50.728 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:50.728
00:10:50.728 FDP write on placement id: 0 success
00:10:50.728
00:10:50.728 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:50.728
00:10:50.728 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:50.728
00:10:50.728 Get Feature: FDP Events for Placement handle: #0
00:10:50.728 ========================
00:10:50.728 Number of FDP Events: 6
00:10:50.728 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:50.728 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:50.728 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:10:50.728 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:50.728 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:50.728 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:50.728
00:10:50.728 FDP events log page
00:10:50.728 ===================
00:10:50.728 Number of FDP events: 1
00:10:50.728 FDP Event #0:
00:10:50.728 Event Type: RU Not Written to Capacity
00:10:50.728 Placement Identifier: Valid
00:10:50.728 NSID: Valid
00:10:50.728 Location: Valid
00:10:50.728 Placement Identifier: 0
00:10:50.728 Event Timestamp: c
00:10:50.728 Namespace Identifier: 1
00:10:50.728 Reclaim Group Identifier: 0
00:10:50.728 Reclaim Unit Handle Identifier: 0
00:10:50.728
00:10:50.728 FDP test passed
00:10:50.728
00:10:50.728 real 0m0.243s
00:10:50.728 user 0m0.067s
00:10:50.728 sys 0m0.074s
00:10:50.728 10:38:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:50.728 10:38:21 -- common/autotest_common.sh@10 -- # set +x
00:10:50.728 ************************************
00:10:50.728 END TEST nvme_flexible_data_placement
00:10:50.728 ************************************
00:10:50.728 ************************************
00:10:50.728 END TEST nvme_fdp
00:10:50.728 ************************************
00:10:50.728
00:10:50.728 real 0m8.018s
00:10:50.728 user 0m1.131s
00:10:50.728 sys 0m1.645s
00:10:50.728 10:38:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:50.728 10:38:21 -- common/autotest_common.sh@10 -- # set +x
00:10:50.728 10:38:21 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:10:50.728 10:38:21 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:50.728 10:38:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:50.728 10:38:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:50.728 10:38:21 -- common/autotest_common.sh@10 -- # set +x
00:10:50.728 ************************************
00:10:50.728 START TEST nvme_rpc
00:10:50.728 ************************************
00:10:50.728 10:38:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:50.728 * Looking for test storage...
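For reference, the "FDP write on placement id: 0 success" step in the FDP pass above is an ordinary write tagged for placement: directive type 2 (data placement) with the placement identifier carried in the directive-specific field. A hedged nvme-cli equivalent, with illustrative device and LBA values rather than anything taken from this run:

```bash
# Hedged sketch: a placement-directed write like the FDP test performs.
# --dir-type=2 selects the data placement directive; --dir-spec is the
# placement identifier (0 here, matching the test's placement id).
dev=/dev/nvme0n1
dd if=/dev/zero of=/tmp/fdp-block.bin bs=4096 count=1 status=none
nvme write "$dev" --start-block=0 --block-count=0 \
     --data-size=4096 --data=/tmp/fdp-block.bin \
     --dir-type=2 --dir-spec=0
```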
00:10:50.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:50.989 10:38:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:50.989 10:38:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:50.989 10:38:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:50.989 10:38:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:50.989 10:38:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:50.989 10:38:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:50.989 10:38:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:50.989 10:38:21 -- scripts/common.sh@335 -- # IFS=.-: 00:10:50.990 10:38:21 -- scripts/common.sh@335 -- # read -ra ver1 00:10:50.990 10:38:21 -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.990 10:38:21 -- scripts/common.sh@336 -- # read -ra ver2 00:10:50.990 10:38:21 -- scripts/common.sh@337 -- # local 'op=<' 00:10:50.990 10:38:21 -- scripts/common.sh@339 -- # ver1_l=2 00:10:50.990 10:38:21 -- scripts/common.sh@340 -- # ver2_l=1 00:10:50.990 10:38:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:50.990 10:38:21 -- scripts/common.sh@343 -- # case "$op" in 00:10:50.990 10:38:21 -- scripts/common.sh@344 -- # : 1 00:10:50.990 10:38:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:50.990 10:38:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.990 10:38:21 -- scripts/common.sh@364 -- # decimal 1 00:10:50.990 10:38:21 -- scripts/common.sh@352 -- # local d=1 00:10:50.990 10:38:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.990 10:38:21 -- scripts/common.sh@354 -- # echo 1 00:10:50.990 10:38:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:50.990 10:38:21 -- scripts/common.sh@365 -- # decimal 2 00:10:50.990 10:38:21 -- scripts/common.sh@352 -- # local d=2 00:10:50.990 10:38:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.990 10:38:21 -- scripts/common.sh@354 -- # echo 2 00:10:50.990 10:38:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:50.990 10:38:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:50.990 10:38:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:50.990 10:38:21 -- scripts/common.sh@367 -- # return 0 00:10:50.990 10:38:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.990 10:38:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:50.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.990 --rc genhtml_branch_coverage=1 00:10:50.990 --rc genhtml_function_coverage=1 00:10:50.990 --rc genhtml_legend=1 00:10:50.990 --rc geninfo_all_blocks=1 00:10:50.990 --rc geninfo_unexecuted_blocks=1 00:10:50.990 00:10:50.990 ' 00:10:50.990 10:38:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:50.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.990 --rc genhtml_branch_coverage=1 00:10:50.990 --rc genhtml_function_coverage=1 00:10:50.990 --rc genhtml_legend=1 00:10:50.990 --rc geninfo_all_blocks=1 00:10:50.990 --rc geninfo_unexecuted_blocks=1 00:10:50.990 00:10:50.990 ' 00:10:50.990 10:38:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:50.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.990 --rc genhtml_branch_coverage=1 00:10:50.990 --rc genhtml_function_coverage=1 00:10:50.990 --rc genhtml_legend=1 00:10:50.990 --rc geninfo_all_blocks=1 00:10:50.990 --rc geninfo_unexecuted_blocks=1 00:10:50.990 00:10:50.990 ' 00:10:50.990 10:38:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:50.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.990 --rc genhtml_branch_coverage=1 00:10:50.990 --rc genhtml_function_coverage=1 00:10:50.990 --rc genhtml_legend=1 00:10:50.990 --rc geninfo_all_blocks=1 00:10:50.990 --rc geninfo_unexecuted_blocks=1 00:10:50.990 00:10:50.990 ' 00:10:50.990 10:38:21 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.990 10:38:21 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:50.990 10:38:21 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:50.990 10:38:21 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:50.990 10:38:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:50.990 10:38:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:50.990 10:38:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:50.990 10:38:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:50.990 10:38:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:50.990 10:38:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:50.990 10:38:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:50.990 10:38:21 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:50.990 10:38:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:50.990 10:38:21 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:50.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.990 10:38:21 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:10:50.990 10:38:21 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66614 00:10:50.990 10:38:21 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:50.990 10:38:21 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:50.990 10:38:21 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66614 00:10:50.990 10:38:21 -- common/autotest_common.sh@829 -- # '[' -z 66614 ']' 00:10:50.990 10:38:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.990 10:38:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.990 10:38:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.990 10:38:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.990 10:38:21 -- common/autotest_common.sh@10 -- # set +x 00:10:50.990 [2024-12-03 10:38:21.575986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
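The get_first_nvme_bdf walk above reduces to: have gen_nvme.sh emit an SPDK bdev config, pull every attach entry's PCI address out with jq, and take the first. A condensed sketch of that selection, assuming the repo layout used on this VM:

```bash
# Hedged sketch of get_first_nvme_bdf: gen_nvme.sh emits a bdev config whose
# entries carry NVMe PCI addresses in .params.traddr; the first one wins.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
echo "first NVMe bdf: ${bdfs[0]}"    # 0000:00:06.0 on this run
```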
00:10:50.990 [2024-12-03 10:38:21.576319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66614 ] 00:10:51.251 [2024-12-03 10:38:21.730114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:51.512 [2024-12-03 10:38:21.999433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:51.512 [2024-12-03 10:38:22.000014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.512 [2024-12-03 10:38:21.999975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.896 10:38:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.896 10:38:23 -- common/autotest_common.sh@862 -- # return 0 00:10:52.896 10:38:23 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:10:52.896 Nvme0n1 00:10:52.896 10:38:23 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:52.896 10:38:23 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:52.896 request: 00:10:52.896 { 00:10:52.896 "filename": "non_existing_file", 00:10:52.896 "bdev_name": "Nvme0n1", 00:10:52.896 "method": "bdev_nvme_apply_firmware", 00:10:52.896 "req_id": 1 00:10:52.896 } 00:10:52.896 Got JSON-RPC error response 00:10:52.896 response: 00:10:52.896 { 00:10:52.896 "code": -32603, 00:10:52.896 "message": "open file failed." 00:10:52.896 } 00:10:53.154 10:38:23 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:53.154 10:38:23 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:53.154 10:38:23 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:53.155 10:38:23 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:53.155 10:38:23 -- nvme/nvme_rpc.sh@40 -- # killprocess 66614 00:10:53.155 10:38:23 -- common/autotest_common.sh@936 -- # '[' -z 66614 ']' 00:10:53.155 10:38:23 -- common/autotest_common.sh@940 -- # kill -0 66614 00:10:53.155 10:38:23 -- common/autotest_common.sh@941 -- # uname 00:10:53.155 10:38:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.155 10:38:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66614 00:10:53.155 killing process with pid 66614 00:10:53.155 10:38:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:53.155 10:38:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:53.155 10:38:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66614' 00:10:53.155 10:38:23 -- common/autotest_common.sh@955 -- # kill 66614 00:10:53.155 10:38:23 -- common/autotest_common.sh@960 -- # wait 66614 00:10:54.529 ************************************ 00:10:54.529 END TEST nvme_rpc 00:10:54.529 ************************************ 00:10:54.529 00:10:54.529 real 0m3.667s 00:10:54.529 user 0m6.649s 00:10:54.529 sys 0m0.730s 00:10:54.529 10:38:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:54.529 10:38:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 10:38:24 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:54.529 10:38:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:54.529 10:38:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 
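The nvme_rpc test that just finished reduces to three rpc.py calls against the spdk_tgt it launched: attach the PCIe controller, attempt a firmware update from a missing file (the -32603 "open file failed." response above is the expected outcome), then detach. A condensed sketch, assuming spdk_tgt is already listening on /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0   # exposes Nvme0n1
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "firmware apply unexpectedly succeeded" >&2; exit 1        # must fail with -32603
    fi
    $rpc bdev_nvme_detach_controller Nvme0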
00:10:54.529 10:38:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 ************************************ 00:10:54.529 START TEST nvme_rpc_timeouts 00:10:54.529 ************************************ 00:10:54.529 10:38:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:54.529 * Looking for test storage... 00:10:54.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:54.529 10:38:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:54.529 10:38:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:54.529 10:38:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:54.529 10:38:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:54.529 10:38:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:54.529 10:38:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:54.529 10:38:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:54.529 10:38:25 -- scripts/common.sh@335 -- # IFS=.-: 00:10:54.529 10:38:25 -- scripts/common.sh@335 -- # read -ra ver1 00:10:54.529 10:38:25 -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.529 10:38:25 -- scripts/common.sh@336 -- # read -ra ver2 00:10:54.529 10:38:25 -- scripts/common.sh@337 -- # local 'op=<' 00:10:54.529 10:38:25 -- scripts/common.sh@339 -- # ver1_l=2 00:10:54.529 10:38:25 -- scripts/common.sh@340 -- # ver2_l=1 00:10:54.529 10:38:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:54.529 10:38:25 -- scripts/common.sh@343 -- # case "$op" in 00:10:54.529 10:38:25 -- scripts/common.sh@344 -- # : 1 00:10:54.529 10:38:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:54.529 10:38:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.529 10:38:25 -- scripts/common.sh@364 -- # decimal 1 00:10:54.529 10:38:25 -- scripts/common.sh@352 -- # local d=1 00:10:54.529 10:38:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.529 10:38:25 -- scripts/common.sh@354 -- # echo 1 00:10:54.529 10:38:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:54.529 10:38:25 -- scripts/common.sh@365 -- # decimal 2 00:10:54.529 10:38:25 -- scripts/common.sh@352 -- # local d=2 00:10:54.529 10:38:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.529 10:38:25 -- scripts/common.sh@354 -- # echo 2 00:10:54.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
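The cmp_versions steps being traced here (and repeated at the top of every test script) are a plain-bash version compare: both strings are split on '.', '-' and ':', then compared numerically field by field. A self-contained sketch of the same idiom, not the harness's exact function:

    # version_lt A B: succeed (return 0) when version A sorts before version B
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields compare as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov is older than 2"   # matches the 'lt 1.15 2' call traced above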
00:10:54.786 10:38:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:54.786 10:38:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:54.786 10:38:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:54.786 10:38:25 -- scripts/common.sh@367 -- # return 0 00:10:54.786 10:38:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.786 10:38:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.786 --rc genhtml_branch_coverage=1 00:10:54.786 --rc genhtml_function_coverage=1 00:10:54.786 --rc genhtml_legend=1 00:10:54.786 --rc geninfo_all_blocks=1 00:10:54.786 --rc geninfo_unexecuted_blocks=1 00:10:54.786 00:10:54.786 ' 00:10:54.786 10:38:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.786 --rc genhtml_branch_coverage=1 00:10:54.786 --rc genhtml_function_coverage=1 00:10:54.786 --rc genhtml_legend=1 00:10:54.786 --rc geninfo_all_blocks=1 00:10:54.786 --rc geninfo_unexecuted_blocks=1 00:10:54.786 00:10:54.786 ' 00:10:54.786 10:38:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.786 --rc genhtml_branch_coverage=1 00:10:54.786 --rc genhtml_function_coverage=1 00:10:54.786 --rc genhtml_legend=1 00:10:54.786 --rc geninfo_all_blocks=1 00:10:54.786 --rc geninfo_unexecuted_blocks=1 00:10:54.786 00:10:54.786 ' 00:10:54.786 10:38:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:54.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.786 --rc genhtml_branch_coverage=1 00:10:54.786 --rc genhtml_function_coverage=1 00:10:54.786 --rc genhtml_legend=1 00:10:54.786 --rc geninfo_all_blocks=1 00:10:54.786 --rc geninfo_unexecuted_blocks=1 00:10:54.786 00:10:54.786 ' 00:10:54.786 10:38:25 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.786 10:38:25 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66686 00:10:54.786 10:38:25 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66686 00:10:54.786 10:38:25 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66724 00:10:54.786 10:38:25 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:54.786 10:38:25 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66724 00:10:54.786 10:38:25 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:54.786 10:38:25 -- common/autotest_common.sh@829 -- # '[' -z 66724 ']' 00:10:54.786 10:38:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.786 10:38:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.786 10:38:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.786 10:38:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.786 10:38:25 -- common/autotest_common.sh@10 -- # set +x 00:10:54.786 [2024-12-03 10:38:25.204134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
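waitforlisten above parks the script until the freshly spawned spdk_tgt (pid 66724 here) answers RPCs on /var/tmp/spdk.sock, giving up after max_retries=100. A simplified stand-in for that wait loop; the real helper does more bookkeeping, and polling with rpc_get_methods here is just one workable probe:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    wait_for_rpc() {   # wait_for_rpc <pid> [socket]  (illustrative name, not the harness's)
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1               # target died while starting
            [[ -S $rpc_addr ]] && "$rpc" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }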
00:10:54.786 [2024-12-03 10:38:25.204225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66724 ] 00:10:54.786 [2024-12-03 10:38:25.346990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:55.044 [2024-12-03 10:38:25.521177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:55.044 [2024-12-03 10:38:25.521570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.044 [2024-12-03 10:38:25.521587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.610 10:38:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.610 Checking default timeout settings: 00:10:55.610 10:38:26 -- common/autotest_common.sh@862 -- # return 0 00:10:55.610 10:38:26 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:55.610 10:38:26 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:55.868 Making settings changes with rpc: 00:10:55.868 10:38:26 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:55.868 10:38:26 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:56.126 Check default vs. modified settings: 00:10:56.126 10:38:26 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:56.126 10:38:26 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66686 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66686 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:56.385 Setting action_on_timeout is changed as expected. 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
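The timeouts test above is a snapshot-and-diff: dump the default bdev_nvme options with save_config, flip the three timeout knobs over RPC, dump again, then assert per setting that the value changed. The sequence, condensed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_66686      # defaults: action none, both timeouts 0
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
                               --timeout-admin-us=24000000 \
                               --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_66686     # now: abort / 12000000 / 24000000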
00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66686 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66686 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.385 Setting timeout_us is changed as expected. 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66686 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66686 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:56.385 Setting timeout_admin_us is changed as expected. 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66686 /tmp/settings_modified_66686 00:10:56.385 10:38:26 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66724 00:10:56.385 10:38:26 -- common/autotest_common.sh@936 -- # '[' -z 66724 ']' 00:10:56.385 10:38:26 -- common/autotest_common.sh@940 -- # kill -0 66724 00:10:56.385 10:38:26 -- common/autotest_common.sh@941 -- # uname 00:10:56.385 10:38:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:56.385 10:38:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66724 00:10:56.385 killing process with pid 66724 00:10:56.385 10:38:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:56.385 10:38:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:56.385 10:38:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66724' 00:10:56.385 10:38:26 -- common/autotest_common.sh@955 -- # kill 66724 00:10:56.385 10:38:26 -- common/autotest_common.sh@960 -- # wait 66724 00:10:57.820 RPC TIMEOUT SETTING TEST PASSED. 00:10:57.820 10:38:28 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
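Each per-setting check above extracts a bare value from the saved JSON with the same three-stage pipeline: grep finds the key's line, awk keeps the second field, sed strips quotes and commas. Wrapped as a function (the name is ours; the commands are exactly those in the trace):

    get_setting() {   # get_setting <key> <settings-file>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting timeout_us /tmp/settings_default_66686)    # "0"
    after=$(get_setting timeout_us /tmp/settings_modified_66686)    # "12000000"
    [[ $before != "$after" ]] && echo "Setting timeout_us is changed as expected."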
00:10:57.820 ************************************ 00:10:57.820 END TEST nvme_rpc_timeouts 00:10:57.820 ************************************ 00:10:57.820 00:10:57.820 real 0m3.129s 00:10:57.820 user 0m5.853s 00:10:57.820 sys 0m0.509s 00:10:57.820 10:38:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:57.820 10:38:28 -- common/autotest_common.sh@10 -- # set +x 00:10:57.820 10:38:28 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:10:57.820 10:38:28 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:10:57.820 10:38:28 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:10:57.820 10:38:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:57.820 10:38:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.820 10:38:28 -- common/autotest_common.sh@10 -- # set +x 00:10:57.820 ************************************ 00:10:57.820 START TEST nvme_xnvme 00:10:57.820 ************************************ 00:10:57.820 10:38:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:10:57.820 * Looking for test storage... 00:10:57.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:10:57.820 10:38:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:57.820 10:38:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:57.820 10:38:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:57.820 10:38:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:57.820 10:38:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:57.820 10:38:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:57.820 10:38:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:57.820 10:38:28 -- scripts/common.sh@335 -- # IFS=.-: 00:10:57.820 10:38:28 -- scripts/common.sh@335 -- # read -ra ver1 00:10:57.820 10:38:28 -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.820 10:38:28 -- scripts/common.sh@336 -- # read -ra ver2 00:10:57.820 10:38:28 -- scripts/common.sh@337 -- # local 'op=<' 00:10:57.820 10:38:28 -- scripts/common.sh@339 -- # ver1_l=2 00:10:57.820 10:38:28 -- scripts/common.sh@340 -- # ver2_l=1 00:10:57.820 10:38:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:57.820 10:38:28 -- scripts/common.sh@343 -- # case "$op" in 00:10:57.820 10:38:28 -- scripts/common.sh@344 -- # : 1 00:10:57.820 10:38:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:57.820 10:38:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.820 10:38:28 -- scripts/common.sh@364 -- # decimal 1 00:10:57.820 10:38:28 -- scripts/common.sh@352 -- # local d=1 00:10:57.820 10:38:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.820 10:38:28 -- scripts/common.sh@354 -- # echo 1 00:10:57.820 10:38:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:57.820 10:38:28 -- scripts/common.sh@365 -- # decimal 2 00:10:57.821 10:38:28 -- scripts/common.sh@352 -- # local d=2 00:10:57.821 10:38:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.821 10:38:28 -- scripts/common.sh@354 -- # echo 2 00:10:57.821 10:38:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:57.821 10:38:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:57.821 10:38:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:57.821 10:38:28 -- scripts/common.sh@367 -- # return 0 00:10:57.821 10:38:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.821 10:38:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:57.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.821 --rc genhtml_branch_coverage=1 00:10:57.821 --rc genhtml_function_coverage=1 00:10:57.821 --rc genhtml_legend=1 00:10:57.821 --rc geninfo_all_blocks=1 00:10:57.821 --rc geninfo_unexecuted_blocks=1 00:10:57.821 00:10:57.821 ' 00:10:57.821 10:38:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:57.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.821 --rc genhtml_branch_coverage=1 00:10:57.821 --rc genhtml_function_coverage=1 00:10:57.821 --rc genhtml_legend=1 00:10:57.821 --rc geninfo_all_blocks=1 00:10:57.821 --rc geninfo_unexecuted_blocks=1 00:10:57.821 00:10:57.821 ' 00:10:57.821 10:38:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:57.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.821 --rc genhtml_branch_coverage=1 00:10:57.821 --rc genhtml_function_coverage=1 00:10:57.821 --rc genhtml_legend=1 00:10:57.821 --rc geninfo_all_blocks=1 00:10:57.821 --rc geninfo_unexecuted_blocks=1 00:10:57.821 00:10:57.821 ' 00:10:57.821 10:38:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:57.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.821 --rc genhtml_branch_coverage=1 00:10:57.821 --rc genhtml_function_coverage=1 00:10:57.821 --rc genhtml_legend=1 00:10:57.821 --rc geninfo_all_blocks=1 00:10:57.821 --rc geninfo_unexecuted_blocks=1 00:10:57.821 00:10:57.821 ' 00:10:57.821 10:38:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.821 10:38:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.821 10:38:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.821 10:38:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.821 10:38:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.821 10:38:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.821 10:38:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.821 10:38:28 -- paths/export.sh@5 -- # export PATH 00:10:57.821 10:38:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:10:57.821 10:38:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:57.821 10:38:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.821 10:38:28 -- common/autotest_common.sh@10 -- # set +x 00:10:57.821 ************************************ 00:10:57.821 START TEST xnvme_to_malloc_dd_copy 00:10:57.821 ************************************ 00:10:57.821 10:38:28 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:10:57.821 10:38:28 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:10:57.821 10:38:28 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:10:57.821 10:38:28 -- dd/common.sh@191 -- # return 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@18 -- # local io 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:10:57.821 10:38:28 -- xnvme/xnvme.sh@42 -- # gen_conf 00:10:57.821 10:38:28 -- dd/common.sh@31 -- # xtrace_disable 00:10:57.821 10:38:28 -- common/autotest_common.sh@10 -- # set +x 00:10:57.821 { 00:10:57.821 "subsystems": [ 00:10:57.821 { 00:10:57.821 "subsystem": "bdev", 00:10:57.821 "config": [ 00:10:57.821 { 00:10:57.821 "params": { 00:10:57.821 "block_size": 512, 00:10:57.821 "num_blocks": 2097152, 00:10:57.821 "name": "malloc0" 00:10:57.821 }, 00:10:57.821 "method": "bdev_malloc_create" 00:10:57.821 }, 00:10:57.821 { 00:10:57.821 "params": { 00:10:57.821 "io_mechanism": "libaio", 00:10:57.821 "filename": "/dev/nullb0", 00:10:57.821 "name": "null0" 00:10:57.821 }, 00:10:57.821 "method": "bdev_xnvme_create" 00:10:57.821 }, 00:10:57.821 { 00:10:57.821 "method": "bdev_wait_for_examine" 00:10:57.821 } 00:10:57.821 ] 00:10:57.821 } 00:10:57.821 ] 00:10:57.821 } 00:10:57.821 [2024-12-03 10:38:28.410554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:57.822 [2024-12-03 10:38:28.410777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66857 ] 00:10:58.081 [2024-12-03 10:38:28.561736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.340 [2024-12-03 10:38:28.846848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.890  [2024-12-03T10:38:32.446Z] Copying: 227/1024 [MB] (227 MBps) [2024-12-03T10:38:33.381Z] Copying: 455/1024 [MB] (228 MBps) [2024-12-03T10:38:34.316Z] Copying: 760/1024 [MB] (304 MBps) [2024-12-03T10:38:36.222Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:11:05.609 00:11:05.609 10:38:36 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:05.609 10:38:36 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:05.609 10:38:36 -- dd/common.sh@31 -- # xtrace_disable 00:11:05.609 10:38:36 -- common/autotest_common.sh@10 -- # set +x 00:11:05.609 { 00:11:05.609 "subsystems": [ 00:11:05.609 { 00:11:05.609 "subsystem": "bdev", 00:11:05.609 "config": [ 00:11:05.609 { 00:11:05.609 "params": { 00:11:05.609 "block_size": 512, 00:11:05.609 "num_blocks": 2097152, 00:11:05.609 "name": "malloc0" 00:11:05.609 }, 00:11:05.609 "method": "bdev_malloc_create" 00:11:05.609 }, 00:11:05.609 { 00:11:05.609 "params": { 00:11:05.609 "io_mechanism": "libaio", 00:11:05.609 "filename": "/dev/nullb0", 00:11:05.609 "name": "null0" 00:11:05.609 }, 00:11:05.609 "method": "bdev_xnvme_create" 00:11:05.609 }, 00:11:05.609 { 00:11:05.609 "method": "bdev_wait_for_examine" 00:11:05.609 } 00:11:05.609 ] 00:11:05.609 } 00:11:05.609 ] 00:11:05.609 } 00:11:05.609 [2024-12-03 10:38:36.167455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
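The copy test above wires a 1 GiB malloc bdev to an xnvme bdev backed by null_blk and pushes the data through spdk_dd; the JSON handed over on /dev/fd/62 is exactly the config printed in the trace. A minimal reproduction, writing the config to a scratch file (the /tmp path is our stand-in for the harness's fd plumbing):

    modprobe null_blk gb=1            # provides the /dev/nullb0 the xnvme bdev sits on
    conf=/tmp/xnvme_dd.json           # hypothetical scratch path
    cat > "$conf" <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
       "method": "bdev_malloc_create"},
      {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
       "method": "bdev_xnvme_create"},
      {"method": "bdev_wait_for_examine"}
    ]}]}
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json "$conf"
    modprobe -r null_blk              # clean up, as remove_null_blk does at the end of the test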
00:11:05.609 [2024-12-03 10:38:36.168182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66944 ] 00:11:05.867 [2024-12-03 10:38:36.314133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.126 [2024-12-03 10:38:36.487513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.026  [2024-12-03T10:38:39.572Z] Copying: 308/1024 [MB] (308 MBps) [2024-12-03T10:38:40.504Z] Copying: 618/1024 [MB] (309 MBps) [2024-12-03T10:38:40.761Z] Copying: 927/1024 [MB] (309 MBps) [2024-12-03T10:38:43.359Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:11:12.746 00:11:12.746 10:38:42 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:12.746 10:38:42 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:12.746 10:38:42 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:12.746 10:38:42 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:12.746 10:38:42 -- dd/common.sh@31 -- # xtrace_disable 00:11:12.746 10:38:42 -- common/autotest_common.sh@10 -- # set +x 00:11:12.746 { 00:11:12.746 "subsystems": [ 00:11:12.746 { 00:11:12.746 "subsystem": "bdev", 00:11:12.746 "config": [ 00:11:12.746 { 00:11:12.746 "params": { 00:11:12.746 "block_size": 512, 00:11:12.746 "num_blocks": 2097152, 00:11:12.746 "name": "malloc0" 00:11:12.746 }, 00:11:12.746 "method": "bdev_malloc_create" 00:11:12.746 }, 00:11:12.746 { 00:11:12.746 "params": { 00:11:12.746 "io_mechanism": "io_uring", 00:11:12.746 "filename": "/dev/nullb0", 00:11:12.746 "name": "null0" 00:11:12.746 }, 00:11:12.746 "method": "bdev_xnvme_create" 00:11:12.746 }, 00:11:12.746 { 00:11:12.746 "method": "bdev_wait_for_examine" 00:11:12.746 } 00:11:12.746 ] 00:11:12.746 } 00:11:12.746 ] 00:11:12.746 } 00:11:12.746 [2024-12-03 10:38:42.818735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
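The io_uring pass starting above reuses the identical layout with only the xnvme engine swapped; in the harness that is a single associative-array assignment before the config is regenerated:

    # Same bdev, different backend: flip the engine, regenerate the JSON, rerun spdk_dd
    method_bdev_xnvme_create_0["io_mechanism"]=io_uring   # was libaio on the first pass
    # gen_conf then emits "io_mechanism": "io_uring" in the bdev_xnvme_create params

For this run the swap is measurable: the libaio passes average 264 and 309 MB/s, while the io_uring passes below reach 315 and 319 MB/s on the same null_blk device.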
00:11:12.746 [2024-12-03 10:38:42.818841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67026 ] 00:11:12.746 [2024-12-03 10:38:42.966575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.746 [2024-12-03 10:38:43.138843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.648  [2024-12-03T10:38:46.194Z] Copying: 315/1024 [MB] (315 MBps) [2024-12-03T10:38:47.130Z] Copying: 630/1024 [MB] (315 MBps) [2024-12-03T10:38:47.389Z] Copying: 946/1024 [MB] (315 MBps) [2024-12-03T10:38:49.923Z] Copying: 1024/1024 [MB] (average 315 MBps) 00:11:19.310 00:11:19.310 10:38:49 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:19.310 10:38:49 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:19.310 10:38:49 -- dd/common.sh@31 -- # xtrace_disable 00:11:19.310 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:11:19.310 { 00:11:19.310 "subsystems": [ 00:11:19.310 { 00:11:19.310 "subsystem": "bdev", 00:11:19.310 "config": [ 00:11:19.310 { 00:11:19.310 "params": { 00:11:19.310 "block_size": 512, 00:11:19.310 "num_blocks": 2097152, 00:11:19.310 "name": "malloc0" 00:11:19.310 }, 00:11:19.310 "method": "bdev_malloc_create" 00:11:19.310 }, 00:11:19.310 { 00:11:19.310 "params": { 00:11:19.310 "io_mechanism": "io_uring", 00:11:19.310 "filename": "/dev/nullb0", 00:11:19.310 "name": "null0" 00:11:19.310 }, 00:11:19.310 "method": "bdev_xnvme_create" 00:11:19.310 }, 00:11:19.310 { 00:11:19.310 "method": "bdev_wait_for_examine" 00:11:19.310 } 00:11:19.310 ] 00:11:19.310 } 00:11:19.310 ] 00:11:19.310 } 00:11:19.310 [2024-12-03 10:38:49.396665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
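Each engine is exercised in both directions: after malloc0 to null0 covers writes through the xnvme bdev, the run starting above swaps the endpoints to cover reads. The reverse invocation (with $conf standing in for the generated config file, as in the sketch earlier):

    # Second pass per engine: read from the xnvme bdev back into the malloc bdev
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json "$conf"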
00:11:19.310 [2024-12-03 10:38:49.396775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67102 ] 00:11:19.310 [2024-12-03 10:38:49.546738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.310 [2024-12-03 10:38:49.731622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.210  [2024-12-03T10:38:52.757Z] Copying: 318/1024 [MB] (318 MBps) [2024-12-03T10:38:53.692Z] Copying: 637/1024 [MB] (319 MBps) [2024-12-03T10:38:53.950Z] Copying: 957/1024 [MB] (319 MBps) [2024-12-03T10:38:56.478Z] Copying: 1024/1024 [MB] (average 319 MBps) 00:11:25.865 00:11:25.865 10:38:55 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:25.865 10:38:55 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:25.865 ************************************ 00:11:25.865 END TEST xnvme_to_malloc_dd_copy 00:11:25.865 ************************************ 00:11:25.865 00:11:25.865 real 0m27.612s 00:11:25.865 user 0m23.871s 00:11:25.865 sys 0m3.172s 00:11:25.865 10:38:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:25.865 10:38:55 -- common/autotest_common.sh@10 -- # set +x 00:11:25.865 10:38:55 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:25.865 10:38:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:25.865 10:38:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.865 10:38:55 -- common/autotest_common.sh@10 -- # set +x 00:11:25.865 ************************************ 00:11:25.865 START TEST xnvme_bdevperf 00:11:25.865 ************************************ 00:11:25.865 10:38:56 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:25.865 10:38:56 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:25.865 10:38:56 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:25.865 10:38:56 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:25.865 10:38:56 -- dd/common.sh@191 -- # return 00:11:25.865 10:38:56 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@60 -- # local io 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:25.866 10:38:56 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:25.866 10:38:56 -- dd/common.sh@31 -- # xtrace_disable 00:11:25.866 10:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:25.866 { 00:11:25.866 "subsystems": [ 00:11:25.866 { 00:11:25.866 "subsystem": "bdev", 00:11:25.866 "config": [ 00:11:25.866 { 00:11:25.866 "params": { 00:11:25.866 "io_mechanism": "libaio", 
00:11:25.866 "filename": "/dev/nullb0", 00:11:25.866 "name": "null0" 00:11:25.866 }, 00:11:25.866 "method": "bdev_xnvme_create" 00:11:25.866 }, 00:11:25.866 { 00:11:25.866 "method": "bdev_wait_for_examine" 00:11:25.866 } 00:11:25.866 ] 00:11:25.866 } 00:11:25.866 ] 00:11:25.866 } 00:11:25.866 [2024-12-03 10:38:56.089811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:25.866 [2024-12-03 10:38:56.089917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67212 ] 00:11:25.866 [2024-12-03 10:38:56.235639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.866 [2024-12-03 10:38:56.408917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.123 Running I/O for 5 seconds... 00:11:31.388 00:11:31.388 Latency(us) 00:11:31.388 [2024-12-03T10:39:02.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.388 [2024-12-03T10:39:02.001Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:31.388 null0 : 5.00 208696.68 815.22 0.00 0.00 304.50 115.79 1159.48 00:11:31.388 [2024-12-03T10:39:02.001Z] =================================================================================================================== 00:11:31.388 [2024-12-03T10:39:02.001Z] Total : 208696.68 815.22 0.00 0.00 304.50 115.79 1159.48 00:11:31.955 10:39:02 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:31.955 10:39:02 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:31.955 10:39:02 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:31.955 10:39:02 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:31.955 10:39:02 -- dd/common.sh@31 -- # xtrace_disable 00:11:31.955 10:39:02 -- common/autotest_common.sh@10 -- # set +x 00:11:31.955 { 00:11:31.955 "subsystems": [ 00:11:31.955 { 00:11:31.955 "subsystem": "bdev", 00:11:31.955 "config": [ 00:11:31.955 { 00:11:31.955 "params": { 00:11:31.955 "io_mechanism": "io_uring", 00:11:31.955 "filename": "/dev/nullb0", 00:11:31.955 "name": "null0" 00:11:31.955 }, 00:11:31.955 "method": "bdev_xnvme_create" 00:11:31.955 }, 00:11:31.955 { 00:11:31.955 "method": "bdev_wait_for_examine" 00:11:31.955 } 00:11:31.955 ] 00:11:31.955 } 00:11:31.955 ] 00:11:31.955 } 00:11:31.955 [2024-12-03 10:39:02.378657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:31.955 [2024-12-03 10:39:02.378767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67283 ] 00:11:31.955 [2024-12-03 10:39:02.524732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.213 [2024-12-03 10:39:02.692159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.472 Running I/O for 5 seconds... 
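The two bdevperf runs around this point drive 4 KiB random reads at queue depth 64 against the xnvme bdev for five seconds, first over libaio and then over io_uring; only the io_mechanism in the generated JSON differs. The invocation, as traced ($conf again standing in for a file holding the config printed above):

    args=(
        --json "$conf"   # bdev config: null0 on /dev/nullb0
        -q 64            # queue depth
        -w randread      # workload
        -t 5             # run time, seconds
        -T null0         # target bdev
        -o 4096          # I/O size, bytes
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"

The result tables (libaio above, io_uring below) put io_uring ahead here: roughly 239K IOPS at 265.68 us average latency versus roughly 209K IOPS at 304.50 us.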
00:11:37.736 00:11:37.737 Latency(us) 00:11:37.737 [2024-12-03T10:39:08.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.737 [2024-12-03T10:39:08.350Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:37.737 null0 : 5.00 239108.65 934.02 0.00 0.00 265.68 152.02 1676.21 00:11:37.737 [2024-12-03T10:39:08.350Z] =================================================================================================================== 00:11:37.737 [2024-12-03T10:39:08.350Z] Total : 239108.65 934.02 0.00 0.00 265.68 152.02 1676.21 00:11:37.997 10:39:08 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:37.997 10:39:08 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:38.349 ************************************ 00:11:38.349 END TEST xnvme_bdevperf 00:11:38.349 ************************************ 00:11:38.349 00:11:38.349 real 0m12.601s 00:11:38.349 user 0m10.077s 00:11:38.349 sys 0m2.286s 00:11:38.349 10:39:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.349 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:11:38.349 ************************************ 00:11:38.349 END TEST nvme_xnvme 00:11:38.349 ************************************ 00:11:38.349 00:11:38.349 real 0m40.470s 00:11:38.349 user 0m34.067s 00:11:38.349 sys 0m5.565s 00:11:38.349 10:39:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.349 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:11:38.349 10:39:08 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:38.349 10:39:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:38.349 10:39:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.349 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:11:38.349 ************************************ 00:11:38.349 START TEST blockdev_xnvme 00:11:38.349 ************************************ 00:11:38.349 10:39:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:38.349 * Looking for test storage... 00:11:38.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:38.349 10:39:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:38.349 10:39:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:38.349 10:39:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:38.349 10:39:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:38.349 10:39:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:38.349 10:39:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:38.350 10:39:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:38.350 10:39:08 -- scripts/common.sh@335 -- # IFS=.-: 00:11:38.350 10:39:08 -- scripts/common.sh@335 -- # read -ra ver1 00:11:38.350 10:39:08 -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.350 10:39:08 -- scripts/common.sh@336 -- # read -ra ver2 00:11:38.350 10:39:08 -- scripts/common.sh@337 -- # local 'op=<' 00:11:38.350 10:39:08 -- scripts/common.sh@339 -- # ver1_l=2 00:11:38.350 10:39:08 -- scripts/common.sh@340 -- # ver2_l=1 00:11:38.350 10:39:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:38.350 10:39:08 -- scripts/common.sh@343 -- # case "$op" in 00:11:38.350 10:39:08 -- scripts/common.sh@344 -- # : 1 00:11:38.350 10:39:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:38.350 10:39:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.350 10:39:08 -- scripts/common.sh@364 -- # decimal 1 00:11:38.350 10:39:08 -- scripts/common.sh@352 -- # local d=1 00:11:38.350 10:39:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.350 10:39:08 -- scripts/common.sh@354 -- # echo 1 00:11:38.350 10:39:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:38.350 10:39:08 -- scripts/common.sh@365 -- # decimal 2 00:11:38.350 10:39:08 -- scripts/common.sh@352 -- # local d=2 00:11:38.350 10:39:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.350 10:39:08 -- scripts/common.sh@354 -- # echo 2 00:11:38.350 10:39:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:38.350 10:39:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:38.350 10:39:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:38.350 10:39:08 -- scripts/common.sh@367 -- # return 0 00:11:38.350 10:39:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.350 10:39:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 10:39:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 10:39:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 10:39:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 10:39:08 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:38.350 10:39:08 -- bdev/nbd_common.sh@6 -- # set -e 00:11:38.350 10:39:08 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:38.350 10:39:08 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:38.350 10:39:08 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:38.350 10:39:08 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:38.350 10:39:08 -- bdev/blockdev.sh@18 -- # : 00:11:38.350 10:39:08 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:38.350 10:39:08 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:38.350 10:39:08 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:38.350 10:39:08 -- bdev/blockdev.sh@672 -- # uname -s 00:11:38.350 10:39:08 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:38.350 10:39:08 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:38.350 10:39:08 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:11:38.350 10:39:08 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:38.350 10:39:08 -- bdev/blockdev.sh@682 -- # dek= 00:11:38.350 10:39:08 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:38.350 10:39:08 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:38.350 10:39:08 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:38.350 10:39:08 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:11:38.350 10:39:08 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:11:38.350 10:39:08 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:38.350 10:39:08 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67428 00:11:38.350 10:39:08 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:38.350 10:39:08 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:38.350 10:39:08 -- bdev/blockdev.sh@47 -- # waitforlisten 67428 00:11:38.350 10:39:08 -- common/autotest_common.sh@829 -- # '[' -z 67428 ']' 00:11:38.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.350 10:39:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.350 10:39:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.350 10:39:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.350 10:39:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.350 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:11:38.350 [2024-12-03 10:39:08.951594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:38.350 [2024-12-03 10:39:08.951744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67428 ] 00:11:38.650 [2024-12-03 10:39:09.104182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.911 [2024-12-03 10:39:09.376211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:38.911 [2024-12-03 10:39:09.376469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.296 10:39:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.296 10:39:10 -- common/autotest_common.sh@862 -- # return 0 00:11:40.296 10:39:10 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:40.296 10:39:10 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:40.296 10:39:10 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:40.296 10:39:10 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:40.296 10:39:10 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:40.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.556 Waiting for block devices as requested 00:11:40.556 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.556 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.814 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.814 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:46.081 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:46.081 10:39:16 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:46.081 10:39:16 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:46.081 10:39:16 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:46.081 10:39:16 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:46.081 10:39:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:46.081 10:39:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:46.081 10:39:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:46.081 10:39:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:46.081 10:39:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:46.081 10:39:16 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:46.081 10:39:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:46.081 10:39:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:46.081 10:39:16 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:46.081 10:39:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:46.081 10:39:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:46.081 10:39:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:46.081 10:39:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:46.081 10:39:16 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:46.081 10:39:16 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:46.081 10:39:16 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:46.081 10:39:16 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:46.081 10:39:16 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:46.081 10:39:16 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:46.081 10:39:16 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:46.081 10:39:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.081 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.081 10:39:16 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:46.081 nvme0n1 00:11:46.081 nvme1n1 00:11:46.081 nvme1n2 00:11:46.081 nvme1n3 00:11:46.081 nvme2n1 00:11:46.081 nvme3n1 00:11:46.081 10:39:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:46.081 10:39:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.081 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.081 10:39:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@738 -- # cat 00:11:46.081 10:39:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:46.081 10:39:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.081 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.081 10:39:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.081 10:39:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:46.081 10:39:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.081 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.081 10:39:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.082 10:39:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:46.082 10:39:16 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.082 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.082 10:39:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.082 10:39:16 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:46.082 10:39:16 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:46.082 10:39:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.082 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.082 10:39:16 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:46.082 10:39:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.082 10:39:16 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:46.082 10:39:16 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:46.082 10:39:16 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5e618844-360c-4c7f-a1a2-606223b6805f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5e618844-360c-4c7f-a1a2-606223b6805f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8e7adb8e-74f5-4074-96df-cdc930b404f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8e7adb8e-74f5-4074-96df-cdc930b404f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "1041285a-a5b8-4322-9d1b-8c5a9cd10230"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1041285a-a5b8-4322-9d1b-8c5a9cd10230",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "409126dc-116d-454f-a050-314f3dafde3a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "409126dc-116d-454f-a050-314f3dafde3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8c45fcb0-ffb1-4e82-9b0f-125171f49431"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8c45fcb0-ffb1-4e82-9b0f-125171f49431",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e6651701-0727-41dc-8a23-fdf3329db02b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e6651701-0727-41dc-8a23-fdf3329db02b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:46.082 10:39:16 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:46.082 10:39:16 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:46.082 10:39:16 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:46.082 10:39:16 -- bdev/blockdev.sh@752 -- # killprocess 67428 00:11:46.082 10:39:16 -- common/autotest_common.sh@936 -- # '[' -z 67428 ']' 00:11:46.082 10:39:16 -- common/autotest_common.sh@940 -- # kill -0 67428 00:11:46.082 10:39:16 -- common/autotest_common.sh@941 -- # uname 00:11:46.082 10:39:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.082 10:39:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67428 00:11:46.082 killing process with pid 67428 00:11:46.082 10:39:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:46.082 10:39:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:46.082 10:39:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67428' 00:11:46.082 10:39:16 -- common/autotest_common.sh@955 -- # kill 67428 00:11:46.082 10:39:16 -- common/autotest_common.sh@960 -- # wait 67428 00:11:47.458 10:39:17 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:47.458 10:39:17 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:47.458 10:39:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:47.458 10:39:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.458 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:11:47.458 ************************************ 00:11:47.458 START TEST bdev_hello_world 00:11:47.458 ************************************ 00:11:47.458 10:39:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:47.458 [2024-12-03 10:39:17.864446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:47.458 [2024-12-03 10:39:17.864682] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67817 ] 00:11:47.458 [2024-12-03 10:39:18.011442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.717 [2024-12-03 10:39:18.177881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.975 [2024-12-03 10:39:18.481503] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:47.975 [2024-12-03 10:39:18.481676] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:11:47.975 [2024-12-03 10:39:18.481695] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:47.975 [2024-12-03 10:39:18.483263] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:47.975 [2024-12-03 10:39:18.483796] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:47.975 [2024-12-03 10:39:18.483815] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:47.975 [2024-12-03 10:39:18.484393] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:47.975 00:11:47.975 [2024-12-03 10:39:18.484432] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:48.544 00:11:48.544 ************************************ 00:11:48.544 END TEST bdev_hello_world 00:11:48.544 ************************************ 00:11:48.544 real 0m1.327s 00:11:48.544 user 0m1.033s 00:11:48.544 sys 0m0.177s 00:11:48.544 10:39:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:48.544 10:39:19 -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 10:39:19 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:48.804 10:39:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:48.804 10:39:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.804 10:39:19 -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 ************************************ 00:11:48.804 START TEST bdev_bounds 00:11:48.804 ************************************ 00:11:48.804 10:39:19 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:48.804 Process bdevio pid: 67854 00:11:48.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.804 10:39:19 -- bdev/blockdev.sh@288 -- # bdevio_pid=67854 00:11:48.804 10:39:19 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:48.804 10:39:19 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67854' 00:11:48.804 10:39:19 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:48.804 10:39:19 -- bdev/blockdev.sh@291 -- # waitforlisten 67854 00:11:48.804 10:39:19 -- common/autotest_common.sh@829 -- # '[' -z 67854 ']' 00:11:48.804 10:39:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.804 10:39:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.804 10:39:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
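The hello_bdev run above boots the example app from a JSON config, opens nvme0n1, writes a string through the bdev layer, and reads it back. The config file itself is never echoed into the log; one plausible minimal shape for it, with the single entry assumed from the xnvme bdevs created earlier, written and used from the shell:

# Hypothetical config; the real test/bdev/bdev.json is not shown in the log.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": { "filename": "/dev/nvme0n1", "name": "nvme0n1", "io_mechanism": "io_uring" }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b nvme0n1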
00:11:48.804 10:39:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.804 10:39:19 -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 [2024-12-03 10:39:19.262248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:48.804 [2024-12-03 10:39:19.262363] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67854 ] 00:11:49.063 [2024-12-03 10:39:19.415377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.063 [2024-12-03 10:39:19.596968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.063 [2024-12-03 10:39:19.597187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.063 [2024-12-03 10:39:19.597250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.630 10:39:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.630 10:39:20 -- common/autotest_common.sh@862 -- # return 0 00:11:49.630 10:39:20 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:49.630 I/O targets: 00:11:49.630 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:49.630 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:49.630 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:49.630 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:49.630 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:49.630 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:49.630 00:11:49.630 00:11:49.630 CUnit - A unit testing framework for C - Version 2.1-3 00:11:49.630 http://cunit.sourceforge.net/ 00:11:49.630 00:11:49.630 00:11:49.630 Suite: bdevio tests on: nvme3n1 00:11:49.630 Test: blockdev write read block ...passed 00:11:49.630 Test: blockdev write zeroes read block ...passed 00:11:49.630 Test: blockdev write zeroes read no split ...passed 00:11:49.630 Test: blockdev write zeroes read split ...passed 00:11:49.630 Test: blockdev write zeroes read split partial ...passed 00:11:49.630 Test: blockdev reset ...passed 00:11:49.630 Test: blockdev write read 8 blocks ...passed 00:11:49.630 Test: blockdev write read size > 128k ...passed 00:11:49.630 Test: blockdev write read invalid size ...passed 00:11:49.630 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.630 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.630 Test: blockdev write read max offset ...passed 00:11:49.630 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.630 Test: blockdev writev readv 8 blocks ...passed 00:11:49.630 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.630 Test: blockdev writev readv block ...passed 00:11:49.630 Test: blockdev writev readv size > 128k ...passed 00:11:49.630 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.630 Test: blockdev comparev and writev ...passed 00:11:49.630 Test: blockdev nvme passthru rw ...passed 00:11:49.630 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.630 Test: blockdev nvme admin passthru ...passed 00:11:49.630 Test: blockdev copy ...passed 00:11:49.630 Suite: bdevio tests on: nvme2n1 00:11:49.630 Test: blockdev write read block ...passed 00:11:49.630 Test: blockdev write zeroes read block ...passed 00:11:49.888 Test: blockdev write zeroes read no split ...passed 00:11:49.888 Test: blockdev 
write zeroes read split ...passed 00:11:49.888 Test: blockdev write zeroes read split partial ...passed 00:11:49.888 Test: blockdev reset ...passed 00:11:49.888 Test: blockdev write read 8 blocks ...passed 00:11:49.888 Test: blockdev write read size > 128k ...passed 00:11:49.888 Test: blockdev write read invalid size ...passed 00:11:49.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.888 Test: blockdev write read max offset ...passed 00:11:49.888 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.888 Test: blockdev writev readv 8 blocks ...passed 00:11:49.888 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.888 Test: blockdev writev readv block ...passed 00:11:49.888 Test: blockdev writev readv size > 128k ...passed 00:11:49.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.888 Test: blockdev comparev and writev ...passed 00:11:49.888 Test: blockdev nvme passthru rw ...passed 00:11:49.888 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.888 Test: blockdev nvme admin passthru ...passed 00:11:49.888 Test: blockdev copy ...passed 00:11:49.888 Suite: bdevio tests on: nvme1n3 00:11:49.888 Test: blockdev write read block ...passed 00:11:49.888 Test: blockdev write zeroes read block ...passed 00:11:49.888 Test: blockdev write zeroes read no split ...passed 00:11:49.888 Test: blockdev write zeroes read split ...passed 00:11:49.888 Test: blockdev write zeroes read split partial ...passed 00:11:49.888 Test: blockdev reset ...passed 00:11:49.888 Test: blockdev write read 8 blocks ...passed 00:11:49.888 Test: blockdev write read size > 128k ...passed 00:11:49.888 Test: blockdev write read invalid size ...passed 00:11:49.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.888 Test: blockdev write read max offset ...passed 00:11:49.888 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.888 Test: blockdev writev readv 8 blocks ...passed 00:11:49.888 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.888 Test: blockdev writev readv block ...passed 00:11:49.888 Test: blockdev writev readv size > 128k ...passed 00:11:49.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.888 Test: blockdev comparev and writev ...passed 00:11:49.888 Test: blockdev nvme passthru rw ...passed 00:11:49.888 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.888 Test: blockdev nvme admin passthru ...passed 00:11:49.888 Test: blockdev copy ...passed 00:11:49.888 Suite: bdevio tests on: nvme1n2 00:11:49.888 Test: blockdev write read block ...passed 00:11:49.888 Test: blockdev write zeroes read block ...passed 00:11:49.888 Test: blockdev write zeroes read no split ...passed 00:11:49.888 Test: blockdev write zeroes read split ...passed 00:11:49.888 Test: blockdev write zeroes read split partial ...passed 00:11:49.888 Test: blockdev reset ...passed 00:11:49.888 Test: blockdev write read 8 blocks ...passed 00:11:49.888 Test: blockdev write read size > 128k ...passed 00:11:49.888 Test: blockdev write read invalid size ...passed 00:11:49.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.888 Test: blockdev write read max offset 
...passed 00:11:49.888 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.888 Test: blockdev writev readv 8 blocks ...passed 00:11:49.888 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.888 Test: blockdev writev readv block ...passed 00:11:49.888 Test: blockdev writev readv size > 128k ...passed 00:11:49.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.888 Test: blockdev comparev and writev ...passed 00:11:49.888 Test: blockdev nvme passthru rw ...passed 00:11:49.888 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.888 Test: blockdev nvme admin passthru ...passed 00:11:49.888 Test: blockdev copy ...passed 00:11:49.888 Suite: bdevio tests on: nvme1n1 00:11:49.888 Test: blockdev write read block ...passed 00:11:49.888 Test: blockdev write zeroes read block ...passed 00:11:49.888 Test: blockdev write zeroes read no split ...passed 00:11:49.888 Test: blockdev write zeroes read split ...passed 00:11:49.888 Test: blockdev write zeroes read split partial ...passed 00:11:49.888 Test: blockdev reset ...passed 00:11:49.888 Test: blockdev write read 8 blocks ...passed 00:11:49.888 Test: blockdev write read size > 128k ...passed 00:11:49.888 Test: blockdev write read invalid size ...passed 00:11:49.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.888 Test: blockdev write read max offset ...passed 00:11:49.888 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.888 Test: blockdev writev readv 8 blocks ...passed 00:11:49.888 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.888 Test: blockdev writev readv block ...passed 00:11:49.888 Test: blockdev writev readv size > 128k ...passed 00:11:49.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.888 Test: blockdev comparev and writev ...passed 00:11:49.888 Test: blockdev nvme passthru rw ...passed 00:11:49.888 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.888 Test: blockdev nvme admin passthru ...passed 00:11:49.888 Test: blockdev copy ...passed 00:11:49.888 Suite: bdevio tests on: nvme0n1 00:11:49.888 Test: blockdev write read block ...passed 00:11:49.888 Test: blockdev write zeroes read block ...passed 00:11:49.888 Test: blockdev write zeroes read no split ...passed 00:11:50.146 Test: blockdev write zeroes read split ...passed 00:11:50.146 Test: blockdev write zeroes read split partial ...passed 00:11:50.146 Test: blockdev reset ...passed 00:11:50.146 Test: blockdev write read 8 blocks ...passed 00:11:50.146 Test: blockdev write read size > 128k ...passed 00:11:50.146 Test: blockdev write read invalid size ...passed 00:11:50.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:50.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:50.146 Test: blockdev write read max offset ...passed 00:11:50.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:50.146 Test: blockdev writev readv 8 blocks ...passed 00:11:50.146 Test: blockdev writev readv 30 x 1block ...passed 00:11:50.146 Test: blockdev writev readv block ...passed 00:11:50.146 Test: blockdev writev readv size > 128k ...passed 00:11:50.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:50.146 Test: blockdev comparev and writev ...passed 00:11:50.146 Test: blockdev nvme passthru rw ...passed 00:11:50.146 Test: 
blockdev nvme passthru vendor specific ...passed 00:11:50.146 Test: blockdev nvme admin passthru ...passed 00:11:50.146 Test: blockdev copy ...passed 00:11:50.146 00:11:50.146 Run Summary: Type Total Ran Passed Failed Inactive 00:11:50.146 suites 6 6 n/a 0 0 00:11:50.146 tests 138 138 138 0 0 00:11:50.146 asserts 780 780 780 0 n/a 00:11:50.146 00:11:50.146 Elapsed time = 0.975 seconds 00:11:50.146 0 00:11:50.146 10:39:20 -- bdev/blockdev.sh@293 -- # killprocess 67854 00:11:50.146 10:39:20 -- common/autotest_common.sh@936 -- # '[' -z 67854 ']' 00:11:50.146 10:39:20 -- common/autotest_common.sh@940 -- # kill -0 67854 00:11:50.146 10:39:20 -- common/autotest_common.sh@941 -- # uname 00:11:50.146 10:39:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.146 10:39:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67854 00:11:50.146 10:39:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:50.146 10:39:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:50.146 10:39:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67854' 00:11:50.146 killing process with pid 67854 00:11:50.146 10:39:20 -- common/autotest_common.sh@955 -- # kill 67854 00:11:50.146 10:39:20 -- common/autotest_common.sh@960 -- # wait 67854 00:11:50.714 10:39:21 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:50.714 00:11:50.714 real 0m2.037s 00:11:50.714 user 0m4.723s 00:11:50.714 sys 0m0.305s 00:11:50.714 10:39:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:50.714 10:39:21 -- common/autotest_common.sh@10 -- # set +x 00:11:50.714 ************************************ 00:11:50.714 END TEST bdev_bounds 00:11:50.714 ************************************ 00:11:50.714 10:39:21 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:50.714 10:39:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:50.714 10:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:50.714 10:39:21 -- common/autotest_common.sh@10 -- # set +x 00:11:50.714 ************************************ 00:11:50.714 START TEST bdev_nbd 00:11:50.714 ************************************ 00:11:50.714 10:39:21 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:50.714 10:39:21 -- bdev/blockdev.sh@298 -- # uname -s 00:11:50.714 10:39:21 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:50.714 10:39:21 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.714 10:39:21 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:50.714 10:39:21 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:50.714 10:39:21 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:50.714 10:39:21 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:11:50.714 10:39:21 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:50.714 10:39:21 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:50.714 10:39:21 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:50.714 10:39:21 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:11:50.714 
10:39:21 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:50.714 10:39:21 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:50.714 10:39:21 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:50.714 10:39:21 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:50.714 10:39:21 -- bdev/blockdev.sh@316 -- # nbd_pid=67909 00:11:50.714 10:39:21 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:50.714 10:39:21 -- bdev/blockdev.sh@318 -- # waitforlisten 67909 /var/tmp/spdk-nbd.sock 00:11:50.714 10:39:21 -- common/autotest_common.sh@829 -- # '[' -z 67909 ']' 00:11:50.714 10:39:21 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:50.714 10:39:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:50.714 10:39:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.714 10:39:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:50.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:50.714 10:39:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.714 10:39:21 -- common/autotest_common.sh@10 -- # set +x 00:11:50.972 [2024-12-03 10:39:21.376124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:50.972 [2024-12-03 10:39:21.376244] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.972 [2024-12-03 10:39:21.515014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.230 [2024-12-03 10:39:21.684895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.796 10:39:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.796 10:39:22 -- common/autotest_common.sh@862 -- # return 0 00:11:51.796 10:39:22 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@24 -- # local i 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:11:51.796 10:39:22 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:51.796 10:39:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:51.796 10:39:22 -- common/autotest_common.sh@867 -- # local i 00:11:51.796 10:39:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:51.796 10:39:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:51.796 10:39:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:51.796 10:39:22 -- common/autotest_common.sh@871 -- # break 00:11:51.796 10:39:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:51.796 10:39:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:51.796 10:39:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.796 1+0 records in 00:11:51.796 1+0 records out 00:11:51.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864242 s, 4.7 MB/s 00:11:51.796 10:39:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.796 10:39:22 -- common/autotest_common.sh@884 -- # size=4096 00:11:51.796 10:39:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.796 10:39:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:51.796 10:39:22 -- common/autotest_common.sh@887 -- # return 0 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:51.796 10:39:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:11:52.053 10:39:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:52.053 10:39:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:52.053 10:39:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:52.053 10:39:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:52.053 10:39:22 -- common/autotest_common.sh@867 -- # local i 00:11:52.053 10:39:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.053 10:39:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.053 10:39:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:52.053 10:39:22 -- common/autotest_common.sh@871 -- # break 00:11:52.053 10:39:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.053 10:39:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.053 10:39:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.053 1+0 records in 00:11:52.053 1+0 records out 00:11:52.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008602 s, 4.8 MB/s 00:11:52.053 10:39:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.053 10:39:22 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.053 10:39:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.053 10:39:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.053 10:39:22 -- common/autotest_common.sh@887 -- # return 0 00:11:52.053 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:52.053 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:52.053 10:39:22 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:11:52.310 10:39:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:52.310 10:39:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:52.310 10:39:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:52.310 10:39:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:52.310 10:39:22 -- common/autotest_common.sh@867 -- # local i 00:11:52.310 10:39:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.310 10:39:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.310 10:39:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:52.310 10:39:22 -- common/autotest_common.sh@871 -- # break 00:11:52.310 10:39:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.310 10:39:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.310 10:39:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.310 1+0 records in 00:11:52.310 1+0 records out 00:11:52.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102263 s, 4.0 MB/s 00:11:52.310 10:39:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.310 10:39:22 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.310 10:39:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.310 10:39:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.310 10:39:22 -- common/autotest_common.sh@887 -- # return 0 00:11:52.310 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:52.310 10:39:22 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:52.310 10:39:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:11:52.567 10:39:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:52.567 10:39:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:52.567 10:39:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:52.567 10:39:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:52.567 10:39:23 -- common/autotest_common.sh@867 -- # local i 00:11:52.567 10:39:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.567 10:39:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.567 10:39:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:52.567 10:39:23 -- common/autotest_common.sh@871 -- # break 00:11:52.567 10:39:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.567 10:39:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.567 10:39:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.567 1+0 records in 00:11:52.567 1+0 records out 00:11:52.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877303 s, 4.7 MB/s 00:11:52.567 10:39:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.567 10:39:23 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.567 10:39:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.567 10:39:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.567 10:39:23 -- common/autotest_common.sh@887 -- # return 0 00:11:52.567 10:39:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:52.567 10:39:23 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:52.567 10:39:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:11:52.824 10:39:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:52.824 10:39:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:52.824 10:39:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:52.824 10:39:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:52.824 10:39:23 -- common/autotest_common.sh@867 -- # local i 00:11:52.824 10:39:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.824 10:39:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.824 10:39:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:52.824 10:39:23 -- common/autotest_common.sh@871 -- # break 00:11:52.824 10:39:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.824 10:39:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.824 10:39:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.824 1+0 records in 00:11:52.824 1+0 records out 00:11:52.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120758 s, 3.4 MB/s 00:11:52.824 10:39:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.824 10:39:23 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.824 10:39:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.824 10:39:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.824 10:39:23 -- common/autotest_common.sh@887 -- # return 0 00:11:52.824 10:39:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:52.824 10:39:23 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:52.824 10:39:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:11:53.084 10:39:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:53.084 10:39:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:53.084 10:39:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:53.084 10:39:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:53.084 10:39:23 -- common/autotest_common.sh@867 -- # local i 00:11:53.084 10:39:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:53.084 10:39:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:53.084 10:39:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:53.084 10:39:23 -- common/autotest_common.sh@871 -- # break 00:11:53.084 10:39:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:53.084 10:39:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:53.084 10:39:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.084 1+0 records in 00:11:53.084 1+0 records out 00:11:53.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00184889 s, 2.2 MB/s 00:11:53.084 10:39:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.084 10:39:23 -- common/autotest_common.sh@884 -- # size=4096 00:11:53.084 10:39:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.084 10:39:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:53.084 10:39:23 -- common/autotest_common.sh@887 -- # return 0 
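Each pass above attaches one bdev to an NBD node over the dedicated /var/tmp/spdk-nbd.sock socket, then probes the node before trusting it. A sketch of a single attach-and-probe cycle, assuming root privileges, the nbd kernel module loaded, and an SPDK app serving that socket (the retry delay is an assumption; the grep and the single 4 KiB direct read mirror waitfornbd in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
for ((i = 1; i <= 20; i++)); do
    # wait until the kernel publishes the device in /proc/partitions
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1   # delay assumed; the trace only shows the loop counters
done
# prove the node answers I/O with one direct 4 KiB read, as waitfornbd does
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
rm -f /tmp/nbdtest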
00:11:53.084 10:39:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.084 10:39:23 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:53.084 10:39:23 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:53.084 10:39:23 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd0", 00:11:53.085 "bdev_name": "nvme0n1" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd1", 00:11:53.085 "bdev_name": "nvme1n1" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd2", 00:11:53.085 "bdev_name": "nvme1n2" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd3", 00:11:53.085 "bdev_name": "nvme1n3" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd4", 00:11:53.085 "bdev_name": "nvme2n1" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd5", 00:11:53.085 "bdev_name": "nvme3n1" 00:11:53.085 } 00:11:53.085 ]' 00:11:53.085 10:39:23 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:53.085 10:39:23 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd0", 00:11:53.085 "bdev_name": "nvme0n1" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd1", 00:11:53.085 "bdev_name": "nvme1n1" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd2", 00:11:53.085 "bdev_name": "nvme1n2" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd3", 00:11:53.085 "bdev_name": "nvme1n3" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd4", 00:11:53.085 "bdev_name": "nvme2n1" 00:11:53.085 }, 00:11:53.085 { 00:11:53.085 "nbd_device": "/dev/nbd5", 00:11:53.085 "bdev_name": "nvme3n1" 00:11:53.085 } 00:11:53.085 ]' 00:11:53.085 10:39:23 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@51 -- # local i 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@41 -- # break 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.346 10:39:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@41 -- # break 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.607 10:39:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@41 -- # break 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.866 10:39:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@41 -- # break 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@41 -- # break 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.125 10:39:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@41 -- # break 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.383 10:39:24 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.383 10:39:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@65 -- # true 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@65 -- # count=0 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@122 -- # count=0 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:54.649 10:39:25 -- bdev/nbd_common.sh@127 -- # return 0 00:11:54.650 10:39:25 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@12 -- # local i 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:54.650 10:39:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:11:54.914 /dev/nbd0 00:11:54.914 10:39:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:54.914 10:39:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:54.914 10:39:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:54.914 10:39:25 -- common/autotest_common.sh@867 -- # local i 00:11:54.914 10:39:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.914 10:39:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.914 10:39:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:54.914 10:39:25 -- common/autotest_common.sh@871 -- # break 00:11:54.914 10:39:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.914 10:39:25 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:11:54.914 10:39:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.914 1+0 records in 00:11:54.914 1+0 records out 00:11:54.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011786 s, 3.5 MB/s 00:11:54.914 10:39:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.914 10:39:25 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.914 10:39:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.914 10:39:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.914 10:39:25 -- common/autotest_common.sh@887 -- # return 0 00:11:54.914 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.914 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:54.914 10:39:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:11:55.175 /dev/nbd1 00:11:55.175 10:39:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:55.175 10:39:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:55.175 10:39:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:55.175 10:39:25 -- common/autotest_common.sh@867 -- # local i 00:11:55.175 10:39:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.175 10:39:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.175 10:39:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:55.175 10:39:25 -- common/autotest_common.sh@871 -- # break 00:11:55.175 10:39:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.175 10:39:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.175 10:39:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.175 1+0 records in 00:11:55.175 1+0 records out 00:11:55.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113147 s, 3.6 MB/s 00:11:55.175 10:39:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.175 10:39:25 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.175 10:39:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.175 10:39:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.175 10:39:25 -- common/autotest_common.sh@887 -- # return 0 00:11:55.175 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.175 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.175 10:39:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:11:55.175 /dev/nbd10 00:11:55.437 10:39:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:55.437 10:39:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:55.437 10:39:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:55.437 10:39:25 -- common/autotest_common.sh@867 -- # local i 00:11:55.437 10:39:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.437 10:39:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.437 10:39:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:55.437 10:39:25 -- common/autotest_common.sh@871 -- # break 00:11:55.437 10:39:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.437 10:39:25 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.437 10:39:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.437 1+0 records in 00:11:55.437 1+0 records out 00:11:55.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134544 s, 3.0 MB/s 00:11:55.437 10:39:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.437 10:39:25 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.437 10:39:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.437 10:39:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.437 10:39:25 -- common/autotest_common.sh@887 -- # return 0 00:11:55.437 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.437 10:39:25 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.437 10:39:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:11:55.437 /dev/nbd11 00:11:55.437 10:39:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:55.437 10:39:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:55.437 10:39:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:55.437 10:39:26 -- common/autotest_common.sh@867 -- # local i 00:11:55.437 10:39:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.437 10:39:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.437 10:39:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:55.437 10:39:26 -- common/autotest_common.sh@871 -- # break 00:11:55.437 10:39:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.437 10:39:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.437 10:39:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.437 1+0 records in 00:11:55.437 1+0 records out 00:11:55.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111562 s, 3.7 MB/s 00:11:55.437 10:39:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.437 10:39:26 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.437 10:39:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.437 10:39:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.437 10:39:26 -- common/autotest_common.sh@887 -- # return 0 00:11:55.437 10:39:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.437 10:39:26 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.437 10:39:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:11:55.698 /dev/nbd12 00:11:55.698 10:39:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:55.698 10:39:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:55.698 10:39:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:55.698 10:39:26 -- common/autotest_common.sh@867 -- # local i 00:11:55.698 10:39:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.698 10:39:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.698 10:39:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:55.698 10:39:26 -- common/autotest_common.sh@871 -- # break 00:11:55.698 10:39:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
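Once all six devices are attached, the run below pushes 1 MiB of random data through each NBD node and compares it back byte for byte. The round trip for a single device looks like this (block size, count, and the 1M cmp window mirror the log; the scratch path is shortened):

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB test pattern
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through NBD
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                             # verify readback
rm /tmp/nbdrandtest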
00:11:55.698 10:39:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.698 10:39:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.698 1+0 records in 00:11:55.698 1+0 records out 00:11:55.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116175 s, 3.5 MB/s 00:11:55.698 10:39:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.698 10:39:26 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.698 10:39:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.698 10:39:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.698 10:39:26 -- common/autotest_common.sh@887 -- # return 0 00:11:55.698 10:39:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.698 10:39:26 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.698 10:39:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:11:55.960 /dev/nbd13 00:11:55.960 10:39:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:55.960 10:39:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:55.960 10:39:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:55.960 10:39:26 -- common/autotest_common.sh@867 -- # local i 00:11:55.960 10:39:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.960 10:39:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.960 10:39:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:55.960 10:39:26 -- common/autotest_common.sh@871 -- # break 00:11:55.960 10:39:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.960 10:39:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.960 10:39:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.960 1+0 records in 00:11:55.960 1+0 records out 00:11:55.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117539 s, 3.5 MB/s 00:11:55.960 10:39:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.960 10:39:26 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.960 10:39:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.960 10:39:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.960 10:39:26 -- common/autotest_common.sh@887 -- # return 0 00:11:55.960 10:39:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.960 10:39:26 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:55.960 10:39:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:55.960 10:39:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:55.960 10:39:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd0", 00:11:56.222 "bdev_name": "nvme0n1" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd1", 00:11:56.222 "bdev_name": "nvme1n1" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd10", 00:11:56.222 "bdev_name": "nvme1n2" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd11", 00:11:56.222 "bdev_name": "nvme1n3" 00:11:56.222 }, 00:11:56.222 { 
00:11:56.222 "nbd_device": "/dev/nbd12", 00:11:56.222 "bdev_name": "nvme2n1" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd13", 00:11:56.222 "bdev_name": "nvme3n1" 00:11:56.222 } 00:11:56.222 ]' 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd0", 00:11:56.222 "bdev_name": "nvme0n1" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd1", 00:11:56.222 "bdev_name": "nvme1n1" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd10", 00:11:56.222 "bdev_name": "nvme1n2" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd11", 00:11:56.222 "bdev_name": "nvme1n3" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd12", 00:11:56.222 "bdev_name": "nvme2n1" 00:11:56.222 }, 00:11:56.222 { 00:11:56.222 "nbd_device": "/dev/nbd13", 00:11:56.222 "bdev_name": "nvme3n1" 00:11:56.222 } 00:11:56.222 ]' 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:56.222 /dev/nbd1 00:11:56.222 /dev/nbd10 00:11:56.222 /dev/nbd11 00:11:56.222 /dev/nbd12 00:11:56.222 /dev/nbd13' 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:56.222 /dev/nbd1 00:11:56.222 /dev/nbd10 00:11:56.222 /dev/nbd11 00:11:56.222 /dev/nbd12 00:11:56.222 /dev/nbd13' 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@65 -- # count=6 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@66 -- # echo 6 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@95 -- # count=6 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:56.222 256+0 records in 00:11:56.222 256+0 records out 00:11:56.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00868198 s, 121 MB/s 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.222 10:39:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:56.484 256+0 records in 00:11:56.484 256+0 records out 00:11:56.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24101 s, 4.4 MB/s 00:11:56.484 10:39:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.484 10:39:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:56.747 256+0 records in 00:11:56.747 256+0 records out 00:11:56.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247522 s, 4.2 MB/s 00:11:56.747 10:39:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:56.747 10:39:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 
count=256 oflag=direct 00:11:57.007 256+0 records in 00:11:57.007 256+0 records out 00:11:57.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215299 s, 4.9 MB/s 00:11:57.007 10:39:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.007 10:39:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:57.268 256+0 records in 00:11:57.268 256+0 records out 00:11:57.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205359 s, 5.1 MB/s 00:11:57.268 10:39:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.268 10:39:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:57.530 256+0 records in 00:11:57.530 256+0 records out 00:11:57.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.293066 s, 3.6 MB/s 00:11:57.530 10:39:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.530 10:39:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:57.792 256+0 records in 00:11:57.792 256+0 records out 00:11:57.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248165 s, 4.2 MB/s 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.792 
10:39:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@51 -- # local i 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.792 10:39:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@41 -- # break 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.051 10:39:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@41 -- # break 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@41 -- # break 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.310 10:39:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@41 -- # break 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.569 10:39:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@41 -- # break 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.827 10:39:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@41 -- # break 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:59.086 10:39:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:59.344 10:39:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@65 -- # true 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@65 -- # count=0 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@104 -- # count=0 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@109 -- # return 0 00:11:59.345 10:39:29 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:59.345 malloc_lvol_verify 00:11:59.345 10:39:29 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:59.603 dd3e552e-523a-45c7-8a56-abc5244266c1 00:11:59.603 10:39:30 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:11:59.864 041a5436-9616-4684-b74d-b8cebd916f25 00:11:59.864 10:39:30 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:00.122 /dev/nbd0 00:12:00.122 10:39:30 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:00.122 mke2fs 1.47.0 (5-Feb-2023) 00:12:00.122 Discarding device blocks: 0/4096 done 00:12:00.122 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:00.122 00:12:00.123 Allocating group tables: 0/1 done 00:12:00.123 Writing inode tables: 0/1 done 00:12:00.123 Creating journal (1024 blocks): done 00:12:00.123 Writing superblocks and filesystem accounting information: 0/1 done 00:12:00.123 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@51 -- # local i 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@41 -- # break 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:00.123 10:39:30 -- bdev/nbd_common.sh@147 -- # return 0 00:12:00.123 10:39:30 -- bdev/blockdev.sh@324 -- # killprocess 67909 00:12:00.123 10:39:30 -- common/autotest_common.sh@936 -- # '[' -z 67909 ']' 00:12:00.123 10:39:30 -- common/autotest_common.sh@940 -- # kill -0 67909 00:12:00.123 10:39:30 -- common/autotest_common.sh@941 -- # uname 00:12:00.123 10:39:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.123 10:39:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67909 00:12:00.123 killing process with pid 67909 00:12:00.123 10:39:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:00.123 10:39:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:00.123 10:39:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67909' 00:12:00.123 10:39:30 -- common/autotest_common.sh@955 -- # kill 67909 00:12:00.123 10:39:30 -- common/autotest_common.sh@960 -- # wait 67909 00:12:01.062 ************************************ 00:12:01.062 END TEST bdev_nbd 00:12:01.062 ************************************ 00:12:01.062 10:39:31 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:01.062 00:12:01.062 real 0m10.132s 00:12:01.062 user 0m13.502s 00:12:01.062 sys 0m3.436s 00:12:01.062 10:39:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:01.062 10:39:31 -- common/autotest_common.sh@10 -- # set +x 00:12:01.062 10:39:31 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:01.062 10:39:31 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.062 10:39:31 -- common/autotest_common.sh@10 -- # set +x 00:12:01.062 ************************************ 00:12:01.062 START TEST bdev_fio 00:12:01.062 ************************************ 00:12:01.062 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:01.062 10:39:31 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@329 -- # local env_context 00:12:01.062 10:39:31 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:01.062 10:39:31 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:01.062 10:39:31 -- bdev/blockdev.sh@337 -- # echo '' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:01.062 10:39:31 -- bdev/blockdev.sh@337 -- # env_context= 00:12:01.062 10:39:31 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:01.062 10:39:31 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:01.062 10:39:31 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:01.062 10:39:31 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:01.062 10:39:31 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:01.062 10:39:31 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:01.062 10:39:31 -- common/autotest_common.sh@1290 -- # cat 00:12:01.062 10:39:31 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1303 -- # cat 00:12:01.062 10:39:31 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:01.062 10:39:31 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:01.062 10:39:31 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:01.062 10:39:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:01.062 10:39:31 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:01.062 10:39:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:01.062 10:39:31 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:01.062 10:39:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:01.062 10:39:31 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:01.062 10:39:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:01.062 10:39:31 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
00:12:01.062 10:39:31 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:01.062 10:39:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:01.062 10:39:31 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:01.062 10:39:31 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:01.062 10:39:31 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:01.062 10:39:31 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:01.062 10:39:31 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.062 10:39:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.062 10:39:31 -- common/autotest_common.sh@10 -- # set +x 00:12:01.062 ************************************ 00:12:01.062 START TEST bdev_fio_rw_verify 00:12:01.062 ************************************ 00:12:01.062 10:39:31 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.062 10:39:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.062 10:39:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:01.062 10:39:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:01.062 10:39:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:01.062 10:39:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:01.062 10:39:31 -- common/autotest_common.sh@1330 -- # shift 00:12:01.062 10:39:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:01.062 10:39:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:01.062 10:39:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:01.062 10:39:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:01.062 10:39:31 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:01.062 10:39:31 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:01.062 10:39:31 -- common/autotest_common.sh@1336 -- # break 00:12:01.062 10:39:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:01.062 10:39:31 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.322 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.322 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.322 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.322 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.322 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.322 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.322 fio-3.35 00:12:01.322 Starting 6 threads 00:12:13.564 00:12:13.564 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68304: Tue Dec 3 10:39:42 2024 00:12:13.564 read: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(576MiB/10002msec) 00:12:13.564 slat (usec): min=2, max=1853, avg= 5.91, stdev=12.33 00:12:13.564 clat (usec): min=75, max=224694, avg=1304.88, stdev=1786.80 00:12:13.564 lat (usec): min=79, max=224701, avg=1310.79, stdev=1787.00 00:12:13.564 clat percentiles (usec): 00:12:13.564 | 50.000th=[ 1205], 99.000th=[ 3490], 99.900th=[ 4752], 00:12:13.564 | 99.990th=[ 8094], 99.999th=[225444] 00:12:13.564 write: IOPS=15.1k, BW=59.0MiB/s (61.9MB/s)(590MiB/10002msec); 0 zone resets 00:12:13.564 slat (usec): min=12, max=5687, avg=39.71, stdev=137.90 00:12:13.564 clat (usec): min=98, max=10757, avg=1593.72, stdev=827.81 00:12:13.564 lat (usec): min=114, max=10773, avg=1633.42, stdev=839.78 00:12:13.564 clat percentiles (usec): 00:12:13.564 | 50.000th=[ 1434], 99.000th=[ 4359], 99.900th=[ 6063], 99.990th=[ 7635], 00:12:13.564 | 99.999th=[10683] 00:12:13.564 bw ( KiB/s): min=43190, max=87617, per=100.00%, avg=60913.26, stdev=2001.14, samples=114 00:12:13.564 iops : min=10793, max=21903, avg=15227.00, stdev=500.38, samples=114 00:12:13.564 lat (usec) : 100=0.01%, 250=1.79%, 500=5.78%, 750=8.99%, 1000=12.92% 00:12:13.564 lat (msec) : 2=51.31%, 4=18.19%, 10=1.02%, 20=0.01%, 250=0.01% 00:12:13.564 cpu : usr=46.24%, sys=29.72%, ctx=6200, majf=0, minf=16701 00:12:13.564 IO depths : 1=11.3%, 2=23.7%, 4=51.1%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.564 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.564 issued rwts: total=147428,151153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.564 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:13.564 00:12:13.564 Run status group 0 (all jobs): 00:12:13.564 READ: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=576MiB (604MB), run=10002-10002msec 00:12:13.564 WRITE: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=590MiB (619MB), run=10002-10002msec 00:12:13.564 ----------------------------------------------------- 00:12:13.564 Suppressions used: 00:12:13.564 count bytes template 00:12:13.564 6 48 /usr/src/fio/parse.c 00:12:13.564 3638 349248 /usr/src/fio/iolog.c 00:12:13.564 1 8 libtcmalloc_minimal.so 00:12:13.564 1 904 libcrypto.so 00:12:13.564 
----------------------------------------------------- 00:12:13.564 00:12:13.564 00:12:13.564 real 0m11.878s 00:12:13.564 user 0m29.248s 00:12:13.564 sys 0m18.223s 00:12:13.564 10:39:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.564 ************************************ 00:12:13.564 END TEST bdev_fio_rw_verify 00:12:13.564 10:39:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.564 ************************************ 00:12:13.564 10:39:43 -- bdev/blockdev.sh@348 -- # rm -f 00:12:13.564 10:39:43 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.564 10:39:43 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:13.564 10:39:43 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.564 10:39:43 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:13.564 10:39:43 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:13.564 10:39:43 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:13.564 10:39:43 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:13.564 10:39:43 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:13.564 10:39:43 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:13.564 10:39:43 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:13.564 10:39:43 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.564 10:39:43 -- common/autotest_common.sh@1290 -- # cat 00:12:13.564 10:39:43 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:13.564 10:39:43 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:13.564 10:39:43 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:13.564 10:39:43 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:13.565 10:39:43 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5e618844-360c-4c7f-a1a2-606223b6805f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5e618844-360c-4c7f-a1a2-606223b6805f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8e7adb8e-74f5-4074-96df-cdc930b404f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8e7adb8e-74f5-4074-96df-cdc930b404f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "1041285a-a5b8-4322-9d1b-8c5a9cd10230"' ' ],' ' "product_name": 
"xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1041285a-a5b8-4322-9d1b-8c5a9cd10230",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "409126dc-116d-454f-a050-314f3dafde3a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "409126dc-116d-454f-a050-314f3dafde3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8c45fcb0-ffb1-4e82-9b0f-125171f49431"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8c45fcb0-ffb1-4e82-9b0f-125171f49431",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e6651701-0727-41dc-8a23-fdf3329db02b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e6651701-0727-41dc-8a23-fdf3329db02b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:13.565 10:39:43 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:13.565 10:39:43 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:13.565 /home/vagrant/spdk_repo/spdk 00:12:13.565 10:39:43 -- bdev/blockdev.sh@360 -- # popd 00:12:13.565 10:39:43 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:13.565 10:39:43 -- bdev/blockdev.sh@362 -- # return 0 00:12:13.565 00:12:13.565 real 0m12.046s 00:12:13.565 user 0m29.317s 00:12:13.565 sys 0m18.299s 00:12:13.565 10:39:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.565 10:39:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.565 ************************************ 00:12:13.565 END TEST bdev_fio 00:12:13.565 ************************************ 00:12:13.565 10:39:43 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:13.565 10:39:43 -- 
bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:13.565 10:39:43 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:13.565 10:39:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.565 10:39:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.565 ************************************ 00:12:13.565 START TEST bdev_verify 00:12:13.565 ************************************ 00:12:13.565 10:39:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:13.565 [2024-12-03 10:39:43.701759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:13.565 [2024-12-03 10:39:43.701910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68484 ] 00:12:13.565 [2024-12-03 10:39:43.857962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:13.565 [2024-12-03 10:39:44.135606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.565 [2024-12-03 10:39:44.135719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.136 Running I/O for 5 seconds... 00:12:19.433 00:12:19.433 Latency(us) 00:12:19.433 [2024-12-03T10:39:50.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.433 [2024-12-03T10:39:50.046Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:19.433 Verification LBA range: start 0x0 length 0x20000 00:12:19.433 nvme0n1 : 5.09 2200.19 8.59 0.00 0.00 57672.91 16535.24 75416.81 00:12:19.433 [2024-12-03T10:39:50.046Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:19.433 Verification LBA range: start 0x20000 length 0x20000 00:12:19.433 nvme0n1 : 5.07 2170.96 8.48 0.00 0.00 58792.96 12754.31 82272.89 00:12:19.433 [2024-12-03T10:39:50.046Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:19.433 Verification LBA range: start 0x0 length 0x80000 00:12:19.433 nvme1n1 : 5.07 2130.77 8.32 0.00 0.00 59701.44 7763.50 73803.62 00:12:19.433 [2024-12-03T10:39:50.046Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:19.433 Verification LBA range: start 0x80000 length 0x80000 00:12:19.433 nvme1n1 : 5.09 2075.24 8.11 0.00 0.00 61472.38 9931.22 82272.89 00:12:19.433 [2024-12-03T10:39:50.046Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:19.433 Verification LBA range: start 0x0 length 0x80000 00:12:19.433 nvme1n2 : 5.10 2124.54 8.30 0.00 0.00 59828.49 4789.17 79046.50 00:12:19.433 [2024-12-03T10:39:50.046Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:19.433 Verification LBA range: start 0x80000 length 0x80000 00:12:19.433 nvme1n2 : 5.09 2152.78 8.41 0.00 0.00 59083.63 19358.33 72997.02 00:12:19.433 [2024-12-03T10:39:50.046Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:19.433 Verification LBA range: start 0x0 length 0x80000 00:12:19.434 nvme1n3 : 5.10 2203.26 8.61 0.00 0.00 57638.54 7057.72 74206.92 00:12:19.434 [2024-12-03T10:39:50.047Z] Job: nvme1n3 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:12:19.434 Verification LBA range: start 0x80000 length 0x80000 00:12:19.434 nvme1n3 : 5.09 2172.64 8.49 0.00 0.00 58513.48 15426.17 95581.74 00:12:19.434 [2024-12-03T10:39:50.047Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:19.434 Verification LBA range: start 0x0 length 0xbd0bd 00:12:19.434 nvme2n1 : 5.10 2101.02 8.21 0.00 0.00 60404.33 5041.23 78239.90 00:12:19.434 [2024-12-03T10:39:50.047Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:19.434 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:19.434 nvme2n1 : 5.09 2163.60 8.45 0.00 0.00 58671.17 9527.93 80659.69 00:12:19.434 [2024-12-03T10:39:50.047Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:19.434 Verification LBA range: start 0x0 length 0xa0000 00:12:19.434 nvme3n1 : 5.10 2288.83 8.94 0.00 0.00 55236.87 3881.75 76223.41 00:12:19.434 [2024-12-03T10:39:50.047Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:19.434 Verification LBA range: start 0xa0000 length 0xa0000 00:12:19.434 nvme3n1 : 5.10 2316.75 9.05 0.00 0.00 54623.31 13812.97 77030.01 00:12:19.434 [2024-12-03T10:39:50.047Z] =================================================================================================================== 00:12:19.434 [2024-12-03T10:39:50.047Z] Total : 26100.57 101.96 0.00 0.00 58409.58 3881.75 95581.74 00:12:20.379 00:12:20.379 real 0m7.118s 00:12:20.379 user 0m8.848s 00:12:20.379 sys 0m3.407s 00:12:20.379 10:39:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:20.379 ************************************ 00:12:20.379 END TEST bdev_verify 00:12:20.379 ************************************ 00:12:20.379 10:39:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.379 10:39:50 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:20.379 10:39:50 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:20.379 10:39:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.379 10:39:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.379 ************************************ 00:12:20.379 START TEST bdev_verify_big_io 00:12:20.379 ************************************ 00:12:20.379 10:39:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:20.379 [2024-12-03 10:39:50.881719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:20.379 [2024-12-03 10:39:50.881882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68583 ] 00:12:20.641 [2024-12-03 10:39:51.031424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:20.902 [2024-12-03 10:39:51.314280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.902 [2024-12-03 10:39:51.314387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.477 Running I/O for 5 seconds... 
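For context: the two bdevperf passes around this point differ only in I/O size — bdev_verify above used 4 KiB I/Os, while the bdev_verify_big_io run now in progress uses 64 KiB. A minimal sketch of reproducing the big-I/O pass by hand, using the paths and flags visible in this run; the flag glosses are mine and follow common SPDK bdevperf usage rather than coming from the log, so check them against `bdevperf -h` on your checkout:

    SPDK=/home/vagrant/spdk_repo/spdk
    # -q 128   : 128 outstanding I/Os per job
    # -o 65536 : 64 KiB I/O size (the earlier bdev_verify pass used -o 4096)
    # -w verify: write a pattern, read it back, and compare
    # -t 5     : run each job for 5 seconds
    # -m 0x3   : reactor core mask, cores 0 and 1 (hence the Core Mask 0x1/0x2 job pairs in the tables)
    # -C and the trailing '' are carried over from the log verbatim
    "$SPDK"/build/examples/bdevperf \
        --json "$SPDK"/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''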
00:12:28.066 00:12:28.066 Latency(us) 00:12:28.066 [2024-12-03T10:39:58.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x0 length 0x2000 00:12:28.066 nvme0n1 : 5.51 191.43 11.96 0.00 0.00 645280.60 208102.01 538806.74 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x2000 length 0x2000 00:12:28.066 nvme0n1 : 5.62 157.48 9.84 0.00 0.00 801640.45 57268.38 1335724.50 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x0 length 0x8000 00:12:28.066 nvme1n1 : 5.52 204.39 12.77 0.00 0.00 612766.69 14720.39 822728.86 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x8000 length 0x8000 00:12:28.066 nvme1n1 : 5.60 171.86 10.74 0.00 0.00 710968.44 87919.06 816276.09 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x0 length 0x8000 00:12:28.066 nvme1n2 : 5.53 190.57 11.91 0.00 0.00 639873.03 15022.87 884030.23 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x8000 length 0x8000 00:12:28.066 nvme1n2 : 5.62 171.13 10.70 0.00 0.00 684574.19 57671.68 774333.05 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x0 length 0x8000 00:12:28.066 nvme1n3 : 5.52 190.90 11.93 0.00 0.00 629969.99 27424.30 719484.46 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x8000 length 0x8000 00:12:28.066 nvme1n3 : 5.65 191.21 11.95 0.00 0.00 599092.96 33877.07 761427.50 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x0 length 0xbd0b 00:12:28.066 nvme2n1 : 5.52 212.65 13.29 0.00 0.00 559290.90 11947.72 909841.33 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:28.066 nvme2n1 : 5.77 266.69 16.67 0.00 0.00 412418.74 4209.43 929199.66 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0x0 length 0xa000 00:12:28.066 nvme3n1 : 5.52 190.81 11.93 0.00 0.00 614599.23 12703.90 713031.68 00:12:28.066 [2024-12-03T10:39:58.679Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:28.066 Verification LBA range: start 0xa000 length 0xa000 00:12:28.066 nvme3n1 : 6.26 280.87 17.55 0.00 0.00 368555.27 153.60 735616.39 00:12:28.066 [2024-12-03T10:39:58.679Z] =================================================================================================================== 00:12:28.066 [2024-12-03T10:39:58.679Z] Total : 2419.99 151.25 0.00 0.00 583431.75 153.60 1335724.50 00:12:28.634 00:12:28.634 real 0m8.147s 00:12:28.634 user 
0m14.586s 00:12:28.634 sys 0m0.609s 00:12:28.634 10:39:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.634 ************************************ 00:12:28.634 END TEST bdev_verify_big_io 00:12:28.634 10:39:58 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 ************************************ 00:12:28.634 10:39:59 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:28.634 10:39:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:28.634 10:39:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.634 10:39:59 -- common/autotest_common.sh@10 -- # set +x 00:12:28.634 ************************************ 00:12:28.634 START TEST bdev_write_zeroes 00:12:28.634 ************************************ 00:12:28.634 10:39:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:28.634 [2024-12-03 10:39:59.077085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:28.634 [2024-12-03 10:39:59.077179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68698 ] 00:12:28.634 [2024-12-03 10:39:59.218202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.892 [2024-12-03 10:39:59.392782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.151 Running I/O for 1 seconds... 00:12:30.531 00:12:30.531 Latency(us) 00:12:30.531 [2024-12-03T10:40:01.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.531 [2024-12-03T10:40:01.144Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.531 nvme0n1 : 1.01 14357.05 56.08 0.00 0.00 8907.13 6654.42 14821.22 00:12:30.531 [2024-12-03T10:40:01.144Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.532 nvme1n1 : 1.01 14340.06 56.02 0.00 0.00 8912.27 6654.42 14821.22 00:12:30.532 [2024-12-03T10:40:01.145Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.532 nvme1n2 : 1.01 14323.83 55.95 0.00 0.00 8917.12 6452.78 14922.04 00:12:30.532 [2024-12-03T10:40:01.145Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.532 nvme1n3 : 1.01 14307.69 55.89 0.00 0.00 8922.84 6276.33 16031.11 00:12:30.532 [2024-12-03T10:40:01.145Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.532 nvme2n1 : 1.02 15474.56 60.45 0.00 0.00 8242.96 3654.89 16535.24 00:12:30.532 [2024-12-03T10:40:01.145Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.532 nvme3n1 : 1.01 14289.24 55.82 0.00 0.00 8922.86 6856.07 16031.11 00:12:30.532 [2024-12-03T10:40:01.145Z] =================================================================================================================== 00:12:30.532 [2024-12-03T10:40:01.145Z] Total : 87092.43 340.20 0.00 0.00 8796.04 3654.89 16535.24 00:12:31.473 00:12:31.473 real 0m2.696s 00:12:31.473 user 0m2.069s 00:12:31.473 sys 0m0.464s 00:12:31.473 10:40:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.473 
************************************ 00:12:31.473 END TEST bdev_write_zeroes 00:12:31.473 ************************************ 00:12:31.473 10:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 10:40:01 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:31.473 10:40:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:31.473 10:40:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.473 10:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 ************************************ 00:12:31.473 START TEST bdev_json_nonenclosed 00:12:31.473 ************************************ 00:12:31.473 10:40:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:31.473 [2024-12-03 10:40:01.866420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:31.473 [2024-12-03 10:40:01.866566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68740 ] 00:12:31.473 [2024-12-03 10:40:02.021521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.732 [2024-12-03 10:40:02.295394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.732 [2024-12-03 10:40:02.295632] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:31.733 [2024-12-03 10:40:02.295654] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:32.307 00:12:32.307 real 0m0.856s 00:12:32.307 user 0m0.595s 00:12:32.307 sys 0m0.152s 00:12:32.307 10:40:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.307 ************************************ 00:12:32.307 END TEST bdev_json_nonenclosed 00:12:32.307 ************************************ 00:12:32.307 10:40:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.307 10:40:02 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.307 10:40:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:32.307 10:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.307 10:40:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.307 ************************************ 00:12:32.307 START TEST bdev_json_nonarray 00:12:32.307 ************************************ 00:12:32.307 10:40:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.307 [2024-12-03 10:40:02.787883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
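For context on the bdev_json_nonenclosed test that just finished and the bdev_json_nonarray run starting here: each points bdevperf at a deliberately malformed JSON config and passes only if startup fails cleanly with the expected error ("not enclosed in {}" and "'subsystems' should be an array", as the log shows). The fixture files' contents are not shown in this log, so the two config bodies below are stand-ins that should trigger the same errors; a minimal sketch assuming the repo layout from this run:

    SPDK=/home/vagrant/spdk_repo/spdk
    # a top-level array instead of an object -> "not enclosed in {}"
    echo '[]' > /tmp/nonenclosed.json
    # "subsystems" as an object instead of an array -> "'subsystems' should be an array"
    echo '{ "subsystems": {} }' > /tmp/nonarray.json
    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        if "$SPDK"/build/examples/bdevperf --json "$cfg" \
            -q 128 -o 4096 -w write_zeroes -t 1 ''; then
            echo "ERROR: bdevperf accepted invalid config $cfg" >&2
            exit 1
        fi
    done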
00:12:32.307 [2024-12-03 10:40:02.788022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68771 ] 00:12:32.568 [2024-12-03 10:40:02.942998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.839 [2024-12-03 10:40:03.214830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.839 [2024-12-03 10:40:03.215095] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:32.839 [2024-12-03 10:40:03.215122] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:33.125 00:12:33.125 real 0m0.860s 00:12:33.125 user 0m0.598s 00:12:33.125 sys 0m0.152s 00:12:33.125 10:40:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:33.125 10:40:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.125 ************************************ 00:12:33.125 END TEST bdev_json_nonarray 00:12:33.125 ************************************ 00:12:33.125 10:40:03 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:12:33.125 10:40:03 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:12:33.125 10:40:03 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:12:33.125 10:40:03 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:33.125 10:40:03 -- bdev/blockdev.sh@809 -- # cleanup 00:12:33.125 10:40:03 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:33.125 10:40:03 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:33.125 10:40:03 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:12:33.125 10:40:03 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:12:33.125 10:40:03 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:12:33.125 10:40:03 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:12:33.125 10:40:03 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:34.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:36.006 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.006 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.006 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.947 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.947 00:12:36.947 real 0m58.663s 00:12:36.947 user 1m25.096s 00:12:36.947 sys 0m35.490s 00:12:36.947 ************************************ 00:12:36.947 END TEST blockdev_xnvme 00:12:36.947 ************************************ 00:12:36.947 10:40:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:36.947 10:40:07 -- common/autotest_common.sh@10 -- # set +x 00:12:36.947 10:40:07 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:36.947 10:40:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:36.947 10:40:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.947 10:40:07 -- common/autotest_common.sh@10 -- # set +x 00:12:36.947 ************************************ 00:12:36.947 START TEST ublk 00:12:36.947 ************************************ 00:12:36.947 10:40:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:36.947 * Looking for test storage... 
00:12:36.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:36.947 10:40:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:36.947 10:40:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:36.947 10:40:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:37.208 10:40:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:37.208 10:40:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:37.208 10:40:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:37.208 10:40:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:37.208 10:40:07 -- scripts/common.sh@335 -- # IFS=.-: 00:12:37.208 10:40:07 -- scripts/common.sh@335 -- # read -ra ver1 00:12:37.208 10:40:07 -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.208 10:40:07 -- scripts/common.sh@336 -- # read -ra ver2 00:12:37.208 10:40:07 -- scripts/common.sh@337 -- # local 'op=<' 00:12:37.208 10:40:07 -- scripts/common.sh@339 -- # ver1_l=2 00:12:37.208 10:40:07 -- scripts/common.sh@340 -- # ver2_l=1 00:12:37.208 10:40:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:37.208 10:40:07 -- scripts/common.sh@343 -- # case "$op" in 00:12:37.208 10:40:07 -- scripts/common.sh@344 -- # : 1 00:12:37.208 10:40:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:37.208 10:40:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.208 10:40:07 -- scripts/common.sh@364 -- # decimal 1 00:12:37.208 10:40:07 -- scripts/common.sh@352 -- # local d=1 00:12:37.208 10:40:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.208 10:40:07 -- scripts/common.sh@354 -- # echo 1 00:12:37.208 10:40:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:37.208 10:40:07 -- scripts/common.sh@365 -- # decimal 2 00:12:37.208 10:40:07 -- scripts/common.sh@352 -- # local d=2 00:12:37.208 10:40:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.208 10:40:07 -- scripts/common.sh@354 -- # echo 2 00:12:37.208 10:40:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:37.208 10:40:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:37.208 10:40:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:37.208 10:40:07 -- scripts/common.sh@367 -- # return 0 00:12:37.208 10:40:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.208 10:40:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:37.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.208 --rc genhtml_branch_coverage=1 00:12:37.208 --rc genhtml_function_coverage=1 00:12:37.208 --rc genhtml_legend=1 00:12:37.208 --rc geninfo_all_blocks=1 00:12:37.208 --rc geninfo_unexecuted_blocks=1 00:12:37.208 00:12:37.208 ' 00:12:37.208 10:40:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:37.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.208 --rc genhtml_branch_coverage=1 00:12:37.208 --rc genhtml_function_coverage=1 00:12:37.208 --rc genhtml_legend=1 00:12:37.208 --rc geninfo_all_blocks=1 00:12:37.208 --rc geninfo_unexecuted_blocks=1 00:12:37.208 00:12:37.208 ' 00:12:37.208 10:40:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:37.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.208 --rc genhtml_branch_coverage=1 00:12:37.208 --rc genhtml_function_coverage=1 00:12:37.208 --rc genhtml_legend=1 00:12:37.208 --rc geninfo_all_blocks=1 00:12:37.208 --rc geninfo_unexecuted_blocks=1 00:12:37.208 00:12:37.208 ' 00:12:37.208 10:40:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:37.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.208 --rc genhtml_branch_coverage=1 00:12:37.208 --rc genhtml_function_coverage=1 00:12:37.208 --rc genhtml_legend=1 00:12:37.208 --rc geninfo_all_blocks=1 00:12:37.208 --rc geninfo_unexecuted_blocks=1 00:12:37.208 00:12:37.208 ' 00:12:37.208 10:40:07 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:37.208 10:40:07 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:37.208 10:40:07 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:37.208 10:40:07 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:37.208 10:40:07 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:37.208 10:40:07 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:37.208 10:40:07 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:37.208 10:40:07 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:37.208 10:40:07 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:37.208 10:40:07 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:37.208 10:40:07 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:37.208 10:40:07 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:37.208 10:40:07 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:37.208 10:40:07 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:37.208 10:40:07 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:37.208 10:40:07 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:37.208 10:40:07 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:37.208 10:40:07 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:37.208 10:40:07 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:37.208 10:40:07 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:37.208 10:40:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:37.208 10:40:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.208 10:40:07 -- common/autotest_common.sh@10 -- # set +x 00:12:37.208 ************************************ 00:12:37.208 START TEST test_save_ublk_config 00:12:37.208 ************************************ 00:12:37.208 10:40:07 -- common/autotest_common.sh@1114 -- # test_save_config 00:12:37.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.208 10:40:07 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:37.208 10:40:07 -- ublk/ublk.sh@103 -- # tgtpid=69086 00:12:37.208 10:40:07 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:37.208 10:40:07 -- ublk/ublk.sh@106 -- # waitforlisten 69086 00:12:37.208 10:40:07 -- common/autotest_common.sh@829 -- # '[' -z 69086 ']' 00:12:37.208 10:40:07 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:37.208 10:40:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.208 10:40:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.208 10:40:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.208 10:40:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.208 10:40:07 -- common/autotest_common.sh@10 -- # set +x 00:12:37.208 [2024-12-03 10:40:07.728578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
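The suite entry point starts a bare spdk_tgt with ublk debug logging (-L ublk) and then drives everything over JSON-RPC. A minimal sketch of the round trip test_save_ublk_config performs once the target is listening on the default /var/tmp/spdk.sock; the sizes match the saved config shown below (8192 blocks x 4096 bytes = 32 MiB), while the -q/-d flag spellings follow rpc.py conventions and should be checked against `rpc.py ublk_start_disk -h` for your SPDK version:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" ublk_create_target                      # load the ublk target into spdk_tgt
    "$RPC" bdev_malloc_create -b malloc0 32 4096   # 32 MiB RAM bdev, 4 KiB blocks
    "$RPC" ublk_start_disk malloc0 0 -q 1 -d 128   # expose malloc0 as /dev/ublkb0
    "$RPC" save_config > /tmp/ublk.json            # snapshot the live configuration
    # a fresh target can then be seeded with the snapshot,
    # e.g. spdk_tgt --json /tmp/ublk.json, or rpc.py load_config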
00:12:37.208 [2024-12-03 10:40:07.728727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69086 ] 00:12:37.469 [2024-12-03 10:40:07.885644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.730 [2024-12-03 10:40:08.167336] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.730 [2024-12-03 10:40:08.167604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.675 10:40:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.675 10:40:09 -- common/autotest_common.sh@862 -- # return 0 00:12:38.675 10:40:09 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:38.675 10:40:09 -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:38.675 10:40:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.675 10:40:09 -- common/autotest_common.sh@10 -- # set +x 00:12:38.675 [2024-12-03 10:40:09.246994] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:38.936 malloc0 00:12:38.936 [2024-12-03 10:40:09.326225] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:38.936 [2024-12-03 10:40:09.326341] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:38.936 [2024-12-03 10:40:09.326350] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:38.936 [2024-12-03 10:40:09.326362] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:38.936 [2024-12-03 10:40:09.335203] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:38.936 [2024-12-03 10:40:09.335243] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:38.936 [2024-12-03 10:40:09.342099] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:38.936 [2024-12-03 10:40:09.342241] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:38.936 [2024-12-03 10:40:09.359088] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:38.936 0 00:12:38.936 10:40:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.936 10:40:09 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:38.936 10:40:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.936 10:40:09 -- common/autotest_common.sh@10 -- # set +x 00:12:39.198 10:40:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.198 10:40:09 -- ublk/ublk.sh@115 -- # config='{ 00:12:39.198 "subsystems": [ 00:12:39.198 { 00:12:39.198 "subsystem": "iobuf", 00:12:39.198 "config": [ 00:12:39.198 { 00:12:39.198 "method": "iobuf_set_options", 00:12:39.198 "params": { 00:12:39.198 "small_pool_count": 8192, 00:12:39.198 "large_pool_count": 1024, 00:12:39.198 "small_bufsize": 8192, 00:12:39.198 "large_bufsize": 135168 00:12:39.198 } 00:12:39.198 } 00:12:39.198 ] 00:12:39.198 }, 00:12:39.198 { 00:12:39.198 "subsystem": "sock", 00:12:39.198 "config": [ 00:12:39.198 { 00:12:39.198 "method": "sock_impl_set_options", 00:12:39.198 "params": { 00:12:39.198 "impl_name": "posix", 00:12:39.198 "recv_buf_size": 2097152, 00:12:39.198 "send_buf_size": 2097152, 00:12:39.198 "enable_recv_pipe": true, 00:12:39.198 "enable_quickack": false, 00:12:39.198 "enable_placement_id": 0, 00:12:39.198 
"enable_zerocopy_send_server": true, 00:12:39.198 "enable_zerocopy_send_client": false, 00:12:39.198 "zerocopy_threshold": 0, 00:12:39.198 "tls_version": 0, 00:12:39.198 "enable_ktls": false 00:12:39.198 } 00:12:39.198 }, 00:12:39.198 { 00:12:39.198 "method": "sock_impl_set_options", 00:12:39.198 "params": { 00:12:39.198 "impl_name": "ssl", 00:12:39.198 "recv_buf_size": 4096, 00:12:39.198 "send_buf_size": 4096, 00:12:39.199 "enable_recv_pipe": true, 00:12:39.199 "enable_quickack": false, 00:12:39.199 "enable_placement_id": 0, 00:12:39.199 "enable_zerocopy_send_server": true, 00:12:39.199 "enable_zerocopy_send_client": false, 00:12:39.199 "zerocopy_threshold": 0, 00:12:39.199 "tls_version": 0, 00:12:39.199 "enable_ktls": false 00:12:39.199 } 00:12:39.199 } 00:12:39.199 ] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "vmd", 00:12:39.199 "config": [] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "accel", 00:12:39.199 "config": [ 00:12:39.199 { 00:12:39.199 "method": "accel_set_options", 00:12:39.199 "params": { 00:12:39.199 "small_cache_size": 128, 00:12:39.199 "large_cache_size": 16, 00:12:39.199 "task_count": 2048, 00:12:39.199 "sequence_count": 2048, 00:12:39.199 "buf_count": 2048 00:12:39.199 } 00:12:39.199 } 00:12:39.199 ] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "bdev", 00:12:39.199 "config": [ 00:12:39.199 { 00:12:39.199 "method": "bdev_set_options", 00:12:39.199 "params": { 00:12:39.199 "bdev_io_pool_size": 65535, 00:12:39.199 "bdev_io_cache_size": 256, 00:12:39.199 "bdev_auto_examine": true, 00:12:39.199 "iobuf_small_cache_size": 128, 00:12:39.199 "iobuf_large_cache_size": 16 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "bdev_raid_set_options", 00:12:39.199 "params": { 00:12:39.199 "process_window_size_kb": 1024 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "bdev_iscsi_set_options", 00:12:39.199 "params": { 00:12:39.199 "timeout_sec": 30 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "bdev_nvme_set_options", 00:12:39.199 "params": { 00:12:39.199 "action_on_timeout": "none", 00:12:39.199 "timeout_us": 0, 00:12:39.199 "timeout_admin_us": 0, 00:12:39.199 "keep_alive_timeout_ms": 10000, 00:12:39.199 "transport_retry_count": 4, 00:12:39.199 "arbitration_burst": 0, 00:12:39.199 "low_priority_weight": 0, 00:12:39.199 "medium_priority_weight": 0, 00:12:39.199 "high_priority_weight": 0, 00:12:39.199 "nvme_adminq_poll_period_us": 10000, 00:12:39.199 "nvme_ioq_poll_period_us": 0, 00:12:39.199 "io_queue_requests": 0, 00:12:39.199 "delay_cmd_submit": true, 00:12:39.199 "bdev_retry_count": 3, 00:12:39.199 "transport_ack_timeout": 0, 00:12:39.199 "ctrlr_loss_timeout_sec": 0, 00:12:39.199 "reconnect_delay_sec": 0, 00:12:39.199 "fast_io_fail_timeout_sec": 0, 00:12:39.199 "generate_uuids": false, 00:12:39.199 "transport_tos": 0, 00:12:39.199 "io_path_stat": false, 00:12:39.199 "allow_accel_sequence": false 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "bdev_nvme_set_hotplug", 00:12:39.199 "params": { 00:12:39.199 "period_us": 100000, 00:12:39.199 "enable": false 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "bdev_malloc_create", 00:12:39.199 "params": { 00:12:39.199 "name": "malloc0", 00:12:39.199 "num_blocks": 8192, 00:12:39.199 "block_size": 4096, 00:12:39.199 "physical_block_size": 4096, 00:12:39.199 "uuid": "e2fe1715-2dee-4d98-8a48-dc5afc879087", 00:12:39.199 "optimal_io_boundary": 0 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 
"method": "bdev_wait_for_examine" 00:12:39.199 } 00:12:39.199 ] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "scsi", 00:12:39.199 "config": null 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "scheduler", 00:12:39.199 "config": [ 00:12:39.199 { 00:12:39.199 "method": "framework_set_scheduler", 00:12:39.199 "params": { 00:12:39.199 "name": "static" 00:12:39.199 } 00:12:39.199 } 00:12:39.199 ] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "vhost_scsi", 00:12:39.199 "config": [] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "vhost_blk", 00:12:39.199 "config": [] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "ublk", 00:12:39.199 "config": [ 00:12:39.199 { 00:12:39.199 "method": "ublk_create_target", 00:12:39.199 "params": { 00:12:39.199 "cpumask": "1" 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "ublk_start_disk", 00:12:39.199 "params": { 00:12:39.199 "bdev_name": "malloc0", 00:12:39.199 "ublk_id": 0, 00:12:39.199 "num_queues": 1, 00:12:39.199 "queue_depth": 128 00:12:39.199 } 00:12:39.199 } 00:12:39.199 ] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "nbd", 00:12:39.199 "config": [] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "nvmf", 00:12:39.199 "config": [ 00:12:39.199 { 00:12:39.199 "method": "nvmf_set_config", 00:12:39.199 "params": { 00:12:39.199 "discovery_filter": "match_any", 00:12:39.199 "admin_cmd_passthru": { 00:12:39.199 "identify_ctrlr": false 00:12:39.199 } 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "nvmf_set_max_subsystems", 00:12:39.199 "params": { 00:12:39.199 "max_subsystems": 1024 00:12:39.199 } 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "method": "nvmf_set_crdt", 00:12:39.199 "params": { 00:12:39.199 "crdt1": 0, 00:12:39.199 "crdt2": 0, 00:12:39.199 "crdt3": 0 00:12:39.199 } 00:12:39.199 } 00:12:39.199 ] 00:12:39.199 }, 00:12:39.199 { 00:12:39.199 "subsystem": "iscsi", 00:12:39.199 "config": [ 00:12:39.199 { 00:12:39.199 "method": "iscsi_set_options", 00:12:39.199 "params": { 00:12:39.199 "node_base": "iqn.2016-06.io.spdk", 00:12:39.199 "max_sessions": 128, 00:12:39.199 "max_connections_per_session": 2, 00:12:39.199 "max_queue_depth": 64, 00:12:39.199 "default_time2wait": 2, 00:12:39.199 "default_time2retain": 20, 00:12:39.199 "first_burst_length": 8192, 00:12:39.199 "immediate_data": true, 00:12:39.199 "allow_duplicated_isid": false, 00:12:39.199 "error_recovery_level": 0, 00:12:39.199 "nop_timeout": 60, 00:12:39.199 "nop_in_interval": 30, 00:12:39.199 "disable_chap": false, 00:12:39.199 "require_chap": false, 00:12:39.199 "mutual_chap": false, 00:12:39.199 "chap_group": 0, 00:12:39.199 "max_large_datain_per_connection": 64, 00:12:39.200 "max_r2t_per_connection": 4, 00:12:39.200 "pdu_pool_size": 36864, 00:12:39.200 "immediate_data_pool_size": 16384, 00:12:39.200 "data_out_pool_size": 2048 00:12:39.200 } 00:12:39.200 } 00:12:39.200 ] 00:12:39.200 } 00:12:39.200 ] 00:12:39.200 }' 00:12:39.200 10:40:09 -- ublk/ublk.sh@116 -- # killprocess 69086 00:12:39.200 10:40:09 -- common/autotest_common.sh@936 -- # '[' -z 69086 ']' 00:12:39.200 10:40:09 -- common/autotest_common.sh@940 -- # kill -0 69086 00:12:39.200 10:40:09 -- common/autotest_common.sh@941 -- # uname 00:12:39.200 10:40:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:39.200 10:40:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69086 00:12:39.200 10:40:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:39.200 killing process with pid 
69086 00:12:39.200 10:40:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:39.200 10:40:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69086' 00:12:39.200 10:40:09 -- common/autotest_common.sh@955 -- # kill 69086 00:12:39.200 10:40:09 -- common/autotest_common.sh@960 -- # wait 69086 00:12:40.575 [2024-12-03 10:40:10.902409] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:40.575 [2024-12-03 10:40:10.940089] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:40.575 [2024-12-03 10:40:10.940194] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:40.575 [2024-12-03 10:40:10.948078] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:40.575 [2024-12-03 10:40:10.948125] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:40.575 [2024-12-03 10:40:10.948137] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:40.575 [2024-12-03 10:40:10.948160] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:40.575 [2024-12-03 10:40:10.948274] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:41.958 10:40:12 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:41.958 10:40:12 -- ublk/ublk.sh@119 -- # tgtpid=69149 00:12:41.958 10:40:12 -- ublk/ublk.sh@118 -- # echo '{ 00:12:41.958 "subsystems": [ 00:12:41.958 { 00:12:41.958 "subsystem": "iobuf", 00:12:41.958 "config": [ 00:12:41.958 { 00:12:41.958 "method": "iobuf_set_options", 00:12:41.958 "params": { 00:12:41.958 "small_pool_count": 8192, 00:12:41.958 "large_pool_count": 1024, 00:12:41.958 "small_bufsize": 8192, 00:12:41.958 "large_bufsize": 135168 00:12:41.958 } 00:12:41.958 } 00:12:41.958 ] 00:12:41.958 }, 00:12:41.958 { 00:12:41.958 "subsystem": "sock", 00:12:41.958 "config": [ 00:12:41.958 { 00:12:41.958 "method": "sock_impl_set_options", 00:12:41.958 "params": { 00:12:41.958 "impl_name": "posix", 00:12:41.958 "recv_buf_size": 2097152, 00:12:41.958 "send_buf_size": 2097152, 00:12:41.958 "enable_recv_pipe": true, 00:12:41.958 "enable_quickack": false, 00:12:41.958 "enable_placement_id": 0, 00:12:41.958 "enable_zerocopy_send_server": true, 00:12:41.958 "enable_zerocopy_send_client": false, 00:12:41.958 "zerocopy_threshold": 0, 00:12:41.958 "tls_version": 0, 00:12:41.958 "enable_ktls": false 00:12:41.958 } 00:12:41.958 }, 00:12:41.958 { 00:12:41.958 "method": "sock_impl_set_options", 00:12:41.958 "params": { 00:12:41.958 "impl_name": "ssl", 00:12:41.958 "recv_buf_size": 4096, 00:12:41.958 "send_buf_size": 4096, 00:12:41.958 "enable_recv_pipe": true, 00:12:41.958 "enable_quickack": false, 00:12:41.958 "enable_placement_id": 0, 00:12:41.958 "enable_zerocopy_send_server": true, 00:12:41.958 "enable_zerocopy_send_client": false, 00:12:41.958 "zerocopy_threshold": 0, 00:12:41.958 "tls_version": 0, 00:12:41.958 "enable_ktls": false 00:12:41.958 } 00:12:41.958 } 00:12:41.958 ] 00:12:41.958 }, 00:12:41.958 { 00:12:41.958 "subsystem": "vmd", 00:12:41.958 "config": [] 00:12:41.958 }, 00:12:41.958 { 00:12:41.958 "subsystem": "accel", 00:12:41.958 "config": [ 00:12:41.958 { 00:12:41.958 "method": "accel_set_options", 00:12:41.958 "params": { 00:12:41.958 "small_cache_size": 128, 00:12:41.958 "large_cache_size": 16, 00:12:41.958 "task_count": 2048, 00:12:41.958 "sequence_count": 2048, 00:12:41.958 "buf_count": 2048 00:12:41.958 } 00:12:41.958 } 00:12:41.958 ] 00:12:41.958 }, 00:12:41.958 { 
00:12:41.958 "subsystem": "bdev", 00:12:41.958 "config": [ 00:12:41.958 { 00:12:41.958 "method": "bdev_set_options", 00:12:41.958 "params": { 00:12:41.958 "bdev_io_pool_size": 65535, 00:12:41.959 "bdev_io_cache_size": 256, 00:12:41.959 "bdev_auto_examine": true, 00:12:41.959 "iobuf_small_cache_size": 128, 00:12:41.959 "iobuf_large_cache_size": 16 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "bdev_raid_set_options", 00:12:41.959 "params": { 00:12:41.959 "process_window_size_kb": 1024 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "bdev_iscsi_set_options", 00:12:41.959 "params": { 00:12:41.959 "timeout_sec": 30 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "bdev_nvme_set_options", 00:12:41.959 "params": { 00:12:41.959 "action_on_timeout": "none", 00:12:41.959 "timeout_us": 0, 00:12:41.959 "timeout_admin_us": 0, 00:12:41.959 "keep_alive_timeout_ms": 10000, 00:12:41.959 "transport_retry_count": 4, 00:12:41.959 "arbitration_burst": 0, 00:12:41.959 "low_priority_weight": 0, 00:12:41.959 "medium_priority_weight": 0, 00:12:41.959 "high_priority_weight": 0, 00:12:41.959 "nvme_adminq_poll_period_us": 10000, 00:12:41.959 "nvme_ioq_poll_period_us": 0, 00:12:41.959 "io_queue_requests": 0, 00:12:41.959 "delay_cmd_submit": true, 00:12:41.959 "bdev_retry_count": 3, 00:12:41.959 "transport_ack_timeout": 0, 00:12:41.959 "ctrlr_loss_timeout_sec": 0, 00:12:41.959 "reconnect_delay_sec": 0, 00:12:41.959 "fast_io_fail_timeout_sec": 0, 00:12:41.959 "generate_uuids": false, 00:12:41.959 "transport_tos": 0, 00:12:41.959 "io_path_stat": false, 00:12:41.959 "allow_accel_sequence": false 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "bdev_nvme_set_hotplug", 00:12:41.959 "params": { 00:12:41.959 "period_us": 100000, 00:12:41.959 "enable": false 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "bdev_malloc_create", 00:12:41.959 "params": { 00:12:41.959 "name": "malloc0", 00:12:41.959 "num_blocks": 8192, 00:12:41.959 "block_size": 4096, 00:12:41.959 "physical_block_size": 4096, 00:12:41.959 "uuid": "e2fe1715-2dee-4d98-8a48-dc5afc879087", 00:12:41.959 "optimal_io_boundary": 0 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "bdev_wait_for_examine" 00:12:41.959 } 00:12:41.959 ] 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "scsi", 00:12:41.959 "config": null 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "scheduler", 00:12:41.959 "config": [ 00:12:41.959 { 00:12:41.959 "method": "framework_set_scheduler", 00:12:41.959 "params": { 00:12:41.959 "name": "static" 00:12:41.959 } 00:12:41.959 } 00:12:41.959 ] 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "vhost_scsi", 00:12:41.959 "config": [] 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "vhost_blk", 00:12:41.959 "config": [] 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "ublk", 00:12:41.959 "config": [ 00:12:41.959 { 00:12:41.959 "method": "ublk_create_target", 00:12:41.959 "params": { 00:12:41.959 "cpumask": "1" 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "ublk_start_disk", 00:12:41.959 "params": { 00:12:41.959 "bdev_name": "malloc0", 00:12:41.959 "ublk_id": 0, 00:12:41.959 "num_queues": 1, 00:12:41.959 "queue_depth": 128 00:12:41.959 } 00:12:41.959 } 00:12:41.959 ] 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "nbd", 00:12:41.959 "config": [] 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "nvmf", 00:12:41.959 "config": [ 00:12:41.959 { 
00:12:41.959 "method": "nvmf_set_config", 00:12:41.959 "params": { 00:12:41.959 "discovery_filter": "match_any", 00:12:41.959 "admin_cmd_passthru": { 00:12:41.959 "identify_ctrlr": false 00:12:41.959 } 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "nvmf_set_max_subsystems", 00:12:41.959 "params": { 00:12:41.959 "max_subsystems": 1024 00:12:41.959 } 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "method": "nvmf_set_crdt", 00:12:41.959 "params": { 00:12:41.959 "crdt1": 0, 00:12:41.959 "crdt2": 0, 00:12:41.959 "crdt3": 0 00:12:41.959 } 00:12:41.959 } 00:12:41.959 ] 00:12:41.959 }, 00:12:41.959 { 00:12:41.959 "subsystem": "iscsi", 00:12:41.959 "config": [ 00:12:41.959 { 00:12:41.959 "method": "iscsi_set_options", 00:12:41.959 "params": { 00:12:41.959 "node_base": "iqn.2016-06.io.spdk", 00:12:41.959 "max_sessions": 128, 00:12:41.959 "max_connections_per_session": 2, 00:12:41.959 "max_queue_depth": 64, 00:12:41.959 "default_time2wait": 2, 00:12:41.959 "default_time2retain": 20, 00:12:41.959 "first_burst_length": 8192, 00:12:41.959 "immediate_data": true, 00:12:41.959 "allow_duplicated_isid": false, 00:12:41.959 "error_recovery_level": 0, 00:12:41.959 "nop_timeout": 60, 00:12:41.959 "nop_in_interval": 30, 00:12:41.959 "disable_chap": false, 00:12:41.959 "require_chap": false, 00:12:41.959 "mutual_chap": false, 00:12:41.959 "chap_group": 0, 00:12:41.959 "max_large_datain_per_connection": 64, 00:12:41.959 "max_r2t_per_connection": 4, 00:12:41.959 "pdu_pool_size": 36864, 00:12:41.959 "immediate_data_pool_size": 16384, 00:12:41.959 "data_out_pool_size": 2048 00:12:41.959 } 00:12:41.959 } 00:12:41.959 ] 00:12:41.959 } 00:12:41.959 ] 00:12:41.959 }' 00:12:41.959 10:40:12 -- ublk/ublk.sh@121 -- # waitforlisten 69149 00:12:41.959 10:40:12 -- common/autotest_common.sh@829 -- # '[' -z 69149 ']' 00:12:41.959 10:40:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.959 10:40:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.959 10:40:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.959 10:40:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.959 10:40:12 -- common/autotest_common.sh@10 -- # set +x 00:12:41.959 [2024-12-03 10:40:12.422073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:41.959 [2024-12-03 10:40:12.422497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69149 ] 00:12:42.220 [2024-12-03 10:40:12.576574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.220 [2024-12-03 10:40:12.815492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:42.220 [2024-12-03 10:40:12.815746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.162 [2024-12-03 10:40:13.620898] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:43.162 [2024-12-03 10:40:13.628220] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:43.162 [2024-12-03 10:40:13.628315] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:43.162 [2024-12-03 10:40:13.628324] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:43.162 [2024-12-03 10:40:13.628332] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:43.162 [2024-12-03 10:40:13.637181] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:43.162 [2024-12-03 10:40:13.637207] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:43.162 [2024-12-03 10:40:13.644099] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:43.162 [2024-12-03 10:40:13.644219] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:43.163 [2024-12-03 10:40:13.661085] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:43.424 10:40:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.424 10:40:13 -- common/autotest_common.sh@862 -- # return 0 00:12:43.424 10:40:13 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:12:43.424 10:40:13 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:12:43.424 10:40:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.424 10:40:13 -- common/autotest_common.sh@10 -- # set +x 00:12:43.424 10:40:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.424 10:40:13 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:43.424 10:40:13 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:12:43.424 10:40:13 -- ublk/ublk.sh@125 -- # killprocess 69149 00:12:43.424 10:40:13 -- common/autotest_common.sh@936 -- # '[' -z 69149 ']' 00:12:43.424 10:40:13 -- common/autotest_common.sh@940 -- # kill -0 69149 00:12:43.424 10:40:13 -- common/autotest_common.sh@941 -- # uname 00:12:43.424 10:40:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:43.424 10:40:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69149 00:12:43.424 10:40:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:43.424 10:40:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:43.424 killing process with pid 69149 00:12:43.424 10:40:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69149' 00:12:43.424 10:40:14 -- common/autotest_common.sh@955 -- # kill 69149 00:12:43.424 10:40:14 -- common/autotest_common.sh@960 -- # wait 69149 00:12:44.367 [2024-12-03 10:40:14.969676] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 
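The ADD_DEV/SET_PARAMS/START_DEV sequence above was driven entirely by the replayed config, and the test then asserts that the block device came back. Stripped of the xtrace plumbing (rpc_cmd is effectively the harness's wrapper around scripts/rpc.py), the check at ublk.sh@122-123 amounts to:

  blkpath=$(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
  [[ $blkpath == /dev/ublkb0 ]]   # replay must recreate the same device node
  [[ -b $blkpath ]]               # and it must be a real block device

Only after both checks pass does the harness kill pid 69149; the UBLK_CMD_STOP_DEV just submitted completes below, followed by DEL_DEV and target shutdown.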
00:12:44.631 [2024-12-03 10:40:15.008135] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:44.631 [2024-12-03 10:40:15.008229] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:44.631 [2024-12-03 10:40:15.016079] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:44.631 [2024-12-03 10:40:15.016119] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:44.631 [2024-12-03 10:40:15.016125] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:44.631 [2024-12-03 10:40:15.016145] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:44.631 [2024-12-03 10:40:15.016261] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:46.016 10:40:16 -- ublk/ublk.sh@126 -- # trap - EXIT 00:12:46.016 00:12:46.016 real 0m8.725s 00:12:46.016 user 0m6.068s 00:12:46.016 sys 0m3.649s 00:12:46.016 ************************************ 00:12:46.016 END TEST test_save_ublk_config 00:12:46.016 ************************************ 00:12:46.016 10:40:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:46.016 10:40:16 -- common/autotest_common.sh@10 -- # set +x 00:12:46.016 10:40:16 -- ublk/ublk.sh@139 -- # spdk_pid=69232 00:12:46.016 10:40:16 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:46.016 10:40:16 -- ublk/ublk.sh@141 -- # waitforlisten 69232 00:12:46.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.016 10:40:16 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:46.016 10:40:16 -- common/autotest_common.sh@829 -- # '[' -z 69232 ']' 00:12:46.016 10:40:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.016 10:40:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.016 10:40:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.016 10:40:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.016 10:40:16 -- common/autotest_common.sh@10 -- # set +x 00:12:46.016 [2024-12-03 10:40:16.466398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
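With the save-config test finished, the harness launches a fresh target for the remaining ublk tests, this time with -m 0x3 so two reactors start (cores 0 and 1, as the log below confirms). test_create_ublk then provisions a four-queue device; collected from the xtrace that follows, its RPC sequence is roughly:

  ./build/bin/spdk_tgt -m 0x3 -L ublk &
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB Malloc0
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # 4 queues, depth 512

The -q 4 -d 512 pair matches the NUM_QUEUE=4 and QUEUE_DEPTH=512 defaults set when ublk.sh was sourced, and the resulting /dev/ublkb0 is the device the fio verify job writes its 0xcc pattern to further down.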
00:12:46.016 [2024-12-03 10:40:16.466511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69232 ] 00:12:46.016 [2024-12-03 10:40:16.612565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:46.277 [2024-12-03 10:40:16.759877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:46.277 [2024-12-03 10:40:16.760155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.277 [2024-12-03 10:40:16.760187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.848 10:40:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.848 10:40:17 -- common/autotest_common.sh@862 -- # return 0 00:12:46.848 10:40:17 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:12:46.848 10:40:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:46.848 10:40:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.848 10:40:17 -- common/autotest_common.sh@10 -- # set +x 00:12:46.848 ************************************ 00:12:46.848 START TEST test_create_ublk 00:12:46.848 ************************************ 00:12:46.848 10:40:17 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:12:46.848 10:40:17 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:12:46.848 10:40:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.848 10:40:17 -- common/autotest_common.sh@10 -- # set +x 00:12:46.848 [2024-12-03 10:40:17.303533] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:46.848 10:40:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.848 10:40:17 -- ublk/ublk.sh@33 -- # ublk_target= 00:12:46.848 10:40:17 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:12:46.848 10:40:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.848 10:40:17 -- common/autotest_common.sh@10 -- # set +x 00:12:46.848 10:40:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.848 10:40:17 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:12:46.848 10:40:17 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:46.848 10:40:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.848 10:40:17 -- common/autotest_common.sh@10 -- # set +x 00:12:47.110 [2024-12-03 10:40:17.462185] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:47.110 [2024-12-03 10:40:17.462483] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:47.110 [2024-12-03 10:40:17.462495] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:47.110 [2024-12-03 10:40:17.462502] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:47.110 [2024-12-03 10:40:17.471242] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:47.110 [2024-12-03 10:40:17.471265] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:47.110 [2024-12-03 10:40:17.478081] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:47.110 [2024-12-03 10:40:17.494227] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:47.110 [2024-12-03 10:40:17.517073] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:12:47.110 10:40:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.110 10:40:17 -- ublk/ublk.sh@37 -- # ublk_id=0 00:12:47.110 10:40:17 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:12:47.110 10:40:17 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:12:47.110 10:40:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.110 10:40:17 -- common/autotest_common.sh@10 -- # set +x 00:12:47.110 10:40:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.110 10:40:17 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:12:47.110 { 00:12:47.110 "ublk_device": "/dev/ublkb0", 00:12:47.110 "id": 0, 00:12:47.110 "queue_depth": 512, 00:12:47.110 "num_queues": 4, 00:12:47.110 "bdev_name": "Malloc0" 00:12:47.110 } 00:12:47.110 ]' 00:12:47.110 10:40:17 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:12:47.110 10:40:17 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:47.110 10:40:17 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:12:47.110 10:40:17 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:12:47.110 10:40:17 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:12:47.110 10:40:17 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:12:47.110 10:40:17 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:12:47.110 10:40:17 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:12:47.110 10:40:17 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:12:47.110 10:40:17 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:47.110 10:40:17 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:12:47.110 10:40:17 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:12:47.110 10:40:17 -- lvol/common.sh@41 -- # local offset=0 00:12:47.110 10:40:17 -- lvol/common.sh@42 -- # local size=134217728 00:12:47.110 10:40:17 -- lvol/common.sh@43 -- # local rw=write 00:12:47.110 10:40:17 -- lvol/common.sh@44 -- # local pattern=0xcc 00:12:47.110 10:40:17 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:12:47.110 10:40:17 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:47.110 10:40:17 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:12:47.110 10:40:17 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:47.110 10:40:17 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:47.110 10:40:17 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:12:47.371 fio: verification read phase will never start because write phase uses all of runtime 00:12:47.371 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:47.371 fio-3.35 00:12:47.371 Starting 1 process 00:12:57.369 00:12:57.369 fio_test: (groupid=0, jobs=1): err= 0: pid=69277: Tue Dec 3 10:40:27 2024 00:12:57.369 write: IOPS=17.1k, BW=66.7MiB/s (70.0MB/s)(667MiB/10001msec); 0 zone resets 00:12:57.369 clat (usec): min=33, max=10172, avg=57.75, stdev=133.63 00:12:57.369 lat (usec): min=33, max=10188, avg=58.19, stdev=133.65 00:12:57.369 clat percentiles (usec): 00:12:57.369 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 42], 00:12:57.369 | 
30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 55],
00:12:57.369 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 65], 95.00th=[ 69],
00:12:57.369 | 99.00th=[ 80], 99.50th=[ 95], 99.90th=[ 3064], 99.95th=[ 3490],
00:12:57.369 | 99.99th=[ 3884]
00:12:57.369 bw ( KiB/s): min=34568, max=90016, per=100.00%, avg=68597.89, stdev=14981.16, samples=19
00:12:57.369 iops : min= 8642, max=22504, avg=17149.47, stdev=3745.29, samples=19
00:12:57.369 lat (usec) : 50=45.26%, 100=54.27%, 250=0.21%, 500=0.02%, 750=0.01%
00:12:57.369 lat (usec) : 1000=0.02%
00:12:57.369 lat (msec) : 2=0.05%, 4=0.16%, 10=0.01%, 20=0.01%
00:12:57.369 cpu : usr=2.22%, sys=11.15%, ctx=170928, majf=0, minf=796
00:12:57.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:57.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.369 issued rwts: total=0,170876,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.369 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:57.369
00:12:57.369 Run status group 0 (all jobs):
00:12:57.369 WRITE: bw=66.7MiB/s (70.0MB/s), 66.7MiB/s-66.7MiB/s (70.0MB/s-70.0MB/s), io=667MiB (700MB), run=10001-10001msec
00:12:57.369
00:12:57.369 Disk stats (read/write):
00:12:57.369 ublkb0: ios=0/169270, merge=0/0, ticks=0/8544, in_queue=8545, util=98.96%
00:12:57.369 10:40:27 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:12:57.369 10:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.369 10:40:27 -- common/autotest_common.sh@10 -- # set +x
00:12:57.369 [2024-12-03 10:40:27.934106] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:12:57.628 [2024-12-03 10:40:27.983116] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:12:57.628 [2024-12-03 10:40:27.983821] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:12:57.628 [2024-12-03 10:40:27.994124] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:12:57.628 [2024-12-03 10:40:27.997325] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:12:57.628 [2024-12-03 10:40:27.997340] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:12:57.628 10:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.628 10:40:27 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:12:57.628 10:40:27 -- common/autotest_common.sh@650 -- # local es=0
00:12:57.628 10:40:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:12:57.628 10:40:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:57.628 10:40:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:57.628 10:40:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:57.628 10:40:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:57.628 10:40:28 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0
00:12:57.628 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.628 10:40:28 -- common/autotest_common.sh@10 -- # set +x
00:12:57.628 [2024-12-03 10:40:28.009153] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:12:57.628 request:
00:12:57.628 {
00:12:57.628 "ublk_id": 0,
00:12:57.628 "method": "ublk_stop_disk",
00:12:57.628 "req_id": 1
00:12:57.628 }
00:12:57.628 Got JSON-RPC error response
00:12:57.628 response:
00:12:57.628 {
00:12:57.628 "code": 
-19, 00:12:57.628 "message": "No such device" 00:12:57.628 } 00:12:57.628 10:40:28 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:57.628 10:40:28 -- common/autotest_common.sh@653 -- # es=1 00:12:57.628 10:40:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.628 10:40:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.628 10:40:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.628 10:40:28 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:12:57.628 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.628 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:57.628 [2024-12-03 10:40:28.025125] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:57.628 [2024-12-03 10:40:28.033072] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:57.628 [2024-12-03 10:40:28.033100] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:57.628 10:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.628 10:40:28 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:57.628 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.628 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:57.886 10:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.886 10:40:28 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:12:57.886 10:40:28 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:57.886 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.886 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:57.886 10:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.886 10:40:28 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:57.886 10:40:28 -- lvol/common.sh@26 -- # jq length 00:12:57.886 10:40:28 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:57.886 10:40:28 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:57.886 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.886 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:57.886 10:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.886 10:40:28 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:57.886 10:40:28 -- lvol/common.sh@28 -- # jq length 00:12:57.886 ************************************ 00:12:57.886 END TEST test_create_ublk 00:12:57.886 ************************************ 00:12:57.886 10:40:28 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:57.886 00:12:57.886 real 0m11.196s 00:12:57.886 user 0m0.528s 00:12:57.886 sys 0m1.186s 00:12:57.886 10:40:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.886 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:58.145 10:40:28 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:12:58.145 10:40:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:58.145 10:40:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.145 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:58.145 ************************************ 00:12:58.145 START TEST test_create_multi_ublk 00:12:58.145 ************************************ 00:12:58.145 10:40:28 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:12:58.145 10:40:28 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:12:58.145 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.145 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:58.145 [2024-12-03 10:40:28.546569] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK 
target created successfully 00:12:58.145 10:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.145 10:40:28 -- ublk/ublk.sh@62 -- # ublk_target= 00:12:58.145 10:40:28 -- ublk/ublk.sh@64 -- # seq 0 3 00:12:58.145 10:40:28 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.145 10:40:28 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:12:58.145 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.145 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:58.403 10:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.403 10:40:28 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:12:58.403 10:40:28 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:58.403 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.403 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:58.403 [2024-12-03 10:40:28.773176] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:58.403 [2024-12-03 10:40:28.773475] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:58.403 [2024-12-03 10:40:28.773487] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:58.403 [2024-12-03 10:40:28.773493] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.403 [2024-12-03 10:40:28.786238] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.403 [2024-12-03 10:40:28.786260] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.403 [2024-12-03 10:40:28.797087] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.403 [2024-12-03 10:40:28.797577] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:58.403 [2024-12-03 10:40:28.810325] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.403 10:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.403 10:40:28 -- ublk/ublk.sh@68 -- # ublk_id=0 00:12:58.403 10:40:28 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.403 10:40:28 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:12:58.403 10:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.403 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:12:58.662 10:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.662 10:40:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:12:58.662 10:40:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:12:58.662 10:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.662 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.662 [2024-12-03 10:40:29.039157] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:12:58.662 [2024-12-03 10:40:29.039457] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:12:58.662 [2024-12-03 10:40:29.039471] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:58.662 [2024-12-03 10:40:29.039476] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.662 [2024-12-03 10:40:29.047090] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.662 [2024-12-03 10:40:29.047106] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_SET_PARAMS 00:12:58.662 [2024-12-03 10:40:29.055076] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.662 [2024-12-03 10:40:29.055577] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:58.662 [2024-12-03 10:40:29.072080] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.662 10:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.662 10:40:29 -- ublk/ublk.sh@68 -- # ublk_id=1 00:12:58.662 10:40:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.662 10:40:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:12:58.662 10:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.662 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.662 10:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.662 10:40:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:12:58.662 10:40:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:12:58.662 10:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.662 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.662 [2024-12-03 10:40:29.239180] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:12:58.662 [2024-12-03 10:40:29.239475] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:12:58.662 [2024-12-03 10:40:29.239486] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:12:58.662 [2024-12-03 10:40:29.239494] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.662 [2024-12-03 10:40:29.247098] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.662 [2024-12-03 10:40:29.247118] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.662 [2024-12-03 10:40:29.255079] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.662 [2024-12-03 10:40:29.255580] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:12:58.662 [2024-12-03 10:40:29.264118] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.662 10:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.662 10:40:29 -- ublk/ublk.sh@68 -- # ublk_id=2 00:12:58.662 10:40:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.921 10:40:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:12:58.921 10:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.921 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.921 10:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.921 10:40:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:12:58.921 10:40:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:12:58.921 10:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.921 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.921 [2024-12-03 10:40:29.431170] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:12:58.921 [2024-12-03 10:40:29.431467] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:12:58.921 [2024-12-03 10:40:29.431479] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:12:58.921 [2024-12-03 10:40:29.431484] 
ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.921 [2024-12-03 10:40:29.439085] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.921 [2024-12-03 10:40:29.439102] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.921 [2024-12-03 10:40:29.447091] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.921 [2024-12-03 10:40:29.447582] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:12:58.921 [2024-12-03 10:40:29.456103] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.921 10:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.921 10:40:29 -- ublk/ublk.sh@68 -- # ublk_id=3 00:12:58.921 10:40:29 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:12:58.921 10:40:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.921 10:40:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.921 10:40:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.921 10:40:29 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:12:58.921 { 00:12:58.921 "ublk_device": "/dev/ublkb0", 00:12:58.921 "id": 0, 00:12:58.921 "queue_depth": 512, 00:12:58.921 "num_queues": 4, 00:12:58.921 "bdev_name": "Malloc0" 00:12:58.921 }, 00:12:58.921 { 00:12:58.921 "ublk_device": "/dev/ublkb1", 00:12:58.921 "id": 1, 00:12:58.921 "queue_depth": 512, 00:12:58.921 "num_queues": 4, 00:12:58.921 "bdev_name": "Malloc1" 00:12:58.921 }, 00:12:58.921 { 00:12:58.921 "ublk_device": "/dev/ublkb2", 00:12:58.921 "id": 2, 00:12:58.921 "queue_depth": 512, 00:12:58.921 "num_queues": 4, 00:12:58.921 "bdev_name": "Malloc2" 00:12:58.921 }, 00:12:58.921 { 00:12:58.921 "ublk_device": "/dev/ublkb3", 00:12:58.921 "id": 3, 00:12:58.921 "queue_depth": 512, 00:12:58.921 "num_queues": 4, 00:12:58.921 "bdev_name": "Malloc3" 00:12:58.921 } 00:12:58.921 ]' 00:12:58.921 10:40:29 -- ublk/ublk.sh@72 -- # seq 0 3 00:12:58.921 10:40:29 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.921 10:40:29 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:12:58.921 10:40:29 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:58.921 10:40:29 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:12:59.179 10:40:29 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:12:59.179 10:40:29 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:12:59.179 10:40:29 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:12:59.179 10:40:29 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:59.179 10:40:29 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:12:59.179 10:40:29 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:12:59.179 10:40:29 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:12:59.179 10:40:29 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:12:59.179 10:40:29 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:59.179 10:40:29 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:12:59.438 10:40:29 -- ublk/ublk.sh@78 -- # [[ Malloc1 = 
\M\a\l\l\o\c\1 ]] 00:12:59.438 10:40:29 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:59.438 10:40:29 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:12:59.438 10:40:29 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:12:59.438 10:40:29 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:12:59.438 10:40:29 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:12:59.438 10:40:29 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:12:59.438 10:40:29 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:59.438 10:40:29 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:12:59.438 10:40:29 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:59.438 10:40:29 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:12:59.438 10:40:29 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:12:59.438 10:40:29 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:59.438 10:40:29 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:12:59.438 10:40:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:12:59.438 10:40:30 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:12:59.438 10:40:30 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:12:59.438 10:40:30 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:12:59.696 10:40:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:59.696 10:40:30 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:12:59.696 10:40:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:59.696 10:40:30 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:12:59.696 10:40:30 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:12:59.696 10:40:30 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:12:59.696 10:40:30 -- ublk/ublk.sh@85 -- # seq 0 3 00:12:59.696 10:40:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:59.696 10:40:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:12:59.696 10:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.696 10:40:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.696 [2024-12-03 10:40:30.159156] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:59.696 [2024-12-03 10:40:30.200611] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:59.696 [2024-12-03 10:40:30.201722] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:59.696 [2024-12-03 10:40:30.207086] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:59.696 [2024-12-03 10:40:30.207341] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:59.696 [2024-12-03 10:40:30.207354] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:59.696 10:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.696 10:40:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:59.696 10:40:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:12:59.696 10:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.696 10:40:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.697 [2024-12-03 10:40:30.223135] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:12:59.697 [2024-12-03 10:40:30.255121] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:59.697 [2024-12-03 10:40:30.255931] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:12:59.697 [2024-12-03 10:40:30.264121] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:59.697 [2024-12-03 
10:40:30.264375] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:12:59.697 [2024-12-03 10:40:30.264388] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:12:59.697 10:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.697 10:40:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:59.697 10:40:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:12:59.697 10:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.697 10:40:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.697 [2024-12-03 10:40:30.278132] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:12:59.955 [2024-12-03 10:40:30.311080] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:59.955 [2024-12-03 10:40:30.311881] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:12:59.955 [2024-12-03 10:40:30.320117] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:59.955 [2024-12-03 10:40:30.320362] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:12:59.955 [2024-12-03 10:40:30.320377] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:12:59.955 10:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.955 10:40:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:59.955 10:40:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:12:59.955 10:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.955 10:40:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.955 [2024-12-03 10:40:30.335146] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:12:59.955 [2024-12-03 10:40:30.381113] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:59.955 [2024-12-03 10:40:30.381803] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:12:59.955 [2024-12-03 10:40:30.391079] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:59.955 [2024-12-03 10:40:30.391337] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:12:59.955 [2024-12-03 10:40:30.391349] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:12:59.955 10:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.955 10:40:30 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:12:59.955 [2024-12-03 10:40:30.561152] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:00.213 [2024-12-03 10:40:30.567530] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:00.213 [2024-12-03 10:40:30.567558] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:00.213 10:40:30 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:00.213 10:40:30 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:00.213 10:40:30 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:00.213 10:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.213 10:40:30 -- common/autotest_common.sh@10 -- # set +x 00:13:00.471 10:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.471 10:40:30 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:00.471 10:40:30 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:00.471 10:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.471 10:40:30 -- common/autotest_common.sh@10 -- # set +x 00:13:00.729 10:40:31 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.729 10:40:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:00.729 10:40:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:00.729 10:40:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.729 10:40:31 -- common/autotest_common.sh@10 -- # set +x 00:13:01.294 10:40:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.294 10:40:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:01.294 10:40:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:01.294 10:40:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.294 10:40:31 -- common/autotest_common.sh@10 -- # set +x 00:13:01.552 10:40:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.552 10:40:31 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:01.552 10:40:31 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:01.552 10:40:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.552 10:40:31 -- common/autotest_common.sh@10 -- # set +x 00:13:01.552 10:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.552 10:40:32 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:01.552 10:40:32 -- lvol/common.sh@26 -- # jq length 00:13:01.552 10:40:32 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:01.552 10:40:32 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:01.552 10:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.552 10:40:32 -- common/autotest_common.sh@10 -- # set +x 00:13:01.552 10:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.552 10:40:32 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:01.552 10:40:32 -- lvol/common.sh@28 -- # jq length 00:13:01.552 ************************************ 00:13:01.552 END TEST test_create_multi_ublk 00:13:01.552 ************************************ 00:13:01.552 10:40:32 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:01.552 00:13:01.552 real 0m3.556s 00:13:01.552 user 0m0.832s 00:13:01.552 sys 0m0.150s 00:13:01.552 10:40:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.552 10:40:32 -- common/autotest_common.sh@10 -- # set +x 00:13:01.552 10:40:32 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:01.552 10:40:32 -- ublk/ublk.sh@147 -- # cleanup 00:13:01.552 10:40:32 -- ublk/ublk.sh@130 -- # killprocess 69232 00:13:01.552 10:40:32 -- common/autotest_common.sh@936 -- # '[' -z 69232 ']' 00:13:01.552 10:40:32 -- common/autotest_common.sh@940 -- # kill -0 69232 00:13:01.552 10:40:32 -- common/autotest_common.sh@941 -- # uname 00:13:01.552 10:40:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:01.552 10:40:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69232 00:13:01.552 killing process with pid 69232 00:13:01.552 10:40:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:01.552 10:40:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:01.552 10:40:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69232' 00:13:01.552 10:40:32 -- common/autotest_common.sh@955 -- # kill 69232 00:13:01.552 10:40:32 -- common/autotest_common.sh@960 -- # wait 69232 00:13:02.118 [2024-12-03 10:40:32.677748] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:02.118 [2024-12-03 10:40:32.677795] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:03.056 00:13:03.056 real 0m25.871s 00:13:03.056 user 0m35.623s 00:13:03.056 sys 0m10.404s 00:13:03.056 10:40:33 -- common/autotest_common.sh@1115 
-- # xtrace_disable 00:13:03.056 ************************************ 00:13:03.056 10:40:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.056 END TEST ublk 00:13:03.056 ************************************ 00:13:03.056 10:40:33 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:03.057 10:40:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:03.057 10:40:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.057 10:40:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.057 ************************************ 00:13:03.057 START TEST ublk_recovery 00:13:03.057 ************************************ 00:13:03.057 10:40:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:03.057 * Looking for test storage... 00:13:03.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:03.057 10:40:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:03.057 10:40:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:03.057 10:40:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:03.057 10:40:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:03.057 10:40:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:03.057 10:40:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:03.057 10:40:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:03.057 10:40:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:03.057 10:40:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:03.057 10:40:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.057 10:40:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:03.057 10:40:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:03.057 10:40:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:03.057 10:40:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:03.057 10:40:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:03.057 10:40:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:03.057 10:40:33 -- scripts/common.sh@344 -- # : 1 00:13:03.057 10:40:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:03.057 10:40:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.057 10:40:33 -- scripts/common.sh@364 -- # decimal 1 00:13:03.057 10:40:33 -- scripts/common.sh@352 -- # local d=1 00:13:03.057 10:40:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.057 10:40:33 -- scripts/common.sh@354 -- # echo 1 00:13:03.057 10:40:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:03.057 10:40:33 -- scripts/common.sh@365 -- # decimal 2 00:13:03.057 10:40:33 -- scripts/common.sh@352 -- # local d=2 00:13:03.057 10:40:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.057 10:40:33 -- scripts/common.sh@354 -- # echo 2 00:13:03.057 10:40:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:03.057 10:40:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:03.057 10:40:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:03.057 10:40:33 -- scripts/common.sh@367 -- # return 0 00:13:03.057 10:40:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.057 10:40:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:03.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.057 --rc genhtml_branch_coverage=1 00:13:03.057 --rc genhtml_function_coverage=1 00:13:03.057 --rc genhtml_legend=1 00:13:03.057 --rc geninfo_all_blocks=1 00:13:03.057 --rc geninfo_unexecuted_blocks=1 00:13:03.057 00:13:03.057 ' 00:13:03.057 10:40:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:03.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.057 --rc genhtml_branch_coverage=1 00:13:03.057 --rc genhtml_function_coverage=1 00:13:03.057 --rc genhtml_legend=1 00:13:03.057 --rc geninfo_all_blocks=1 00:13:03.057 --rc geninfo_unexecuted_blocks=1 00:13:03.057 00:13:03.057 ' 00:13:03.057 10:40:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:03.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.057 --rc genhtml_branch_coverage=1 00:13:03.057 --rc genhtml_function_coverage=1 00:13:03.057 --rc genhtml_legend=1 00:13:03.057 --rc geninfo_all_blocks=1 00:13:03.057 --rc geninfo_unexecuted_blocks=1 00:13:03.057 00:13:03.057 ' 00:13:03.057 10:40:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:03.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.057 --rc genhtml_branch_coverage=1 00:13:03.057 --rc genhtml_function_coverage=1 00:13:03.057 --rc genhtml_legend=1 00:13:03.057 --rc geninfo_all_blocks=1 00:13:03.057 --rc geninfo_unexecuted_blocks=1 00:13:03.057 00:13:03.057 ' 00:13:03.057 10:40:33 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:03.057 10:40:33 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:03.057 10:40:33 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:03.057 10:40:33 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:03.057 10:40:33 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:03.057 10:40:33 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:03.057 10:40:33 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:03.057 10:40:33 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:03.057 10:40:33 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:03.057 10:40:33 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:03.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
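Before the first RPC of this suite can land, ublk_recovery.sh has to load the kernel driver and bring up the SPDK target, which is what the modprobe and spdk_tgt launch traced here do. A minimal stand-alone sketch of that bring-up, using the paths from this log and substituting a plain polling loop for the harness's waitforlisten helper:

  # Load the kernel-side ublk driver, then start the SPDK target pinned to
  # cores 0-1 (-m 0x3) with ublk debug logging enabled (-L ublk).
  modprobe ublk_drv
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  spdk_pid=$!
  # Poll until the RPC socket answers; the script itself uses waitforlisten,
  # this loop is only an equivalent approximation.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done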
00:13:03.057 10:40:33 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69623 00:13:03.057 10:40:33 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:03.057 10:40:33 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69623 00:13:03.057 10:40:33 -- common/autotest_common.sh@829 -- # '[' -z 69623 ']' 00:13:03.057 10:40:33 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:03.057 10:40:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.057 10:40:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.057 10:40:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.057 10:40:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.057 10:40:33 -- common/autotest_common.sh@10 -- # set +x 00:13:03.057 [2024-12-03 10:40:33.588134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:03.057 [2024-12-03 10:40:33.588753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69623 ] 00:13:03.317 [2024-12-03 10:40:33.736619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:03.317 [2024-12-03 10:40:33.878017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.317 [2024-12-03 10:40:33.878446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.317 [2024-12-03 10:40:33.878580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.884 10:40:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.884 10:40:34 -- common/autotest_common.sh@862 -- # return 0 00:13:03.884 10:40:34 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:03.884 10:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.884 10:40:34 -- common/autotest_common.sh@10 -- # set +x 00:13:03.884 [2024-12-03 10:40:34.397549] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:03.884 10:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.884 10:40:34 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:03.884 10:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.884 10:40:34 -- common/autotest_common.sh@10 -- # set +x 00:13:03.884 malloc0 00:13:03.884 10:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.884 10:40:34 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:03.884 10:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.884 10:40:34 -- common/autotest_common.sh@10 -- # set +x 00:13:03.884 [2024-12-03 10:40:34.484183] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:03.884 [2024-12-03 10:40:34.484269] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:03.884 [2024-12-03 10:40:34.484275] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:03.884 [2024-12-03 10:40:34.484282] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:03.884 [2024-12-03 10:40:34.493160] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:03.884 [2024-12-03 
10:40:34.493181] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:04.143 [2024-12-03 10:40:34.500082] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:04.143 [2024-12-03 10:40:34.500197] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:04.143 [2024-12-03 10:40:34.516094] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:04.143 1 00:13:04.143 10:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.143 10:40:34 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:05.117 10:40:35 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69658 00:13:05.117 10:40:35 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:05.117 10:40:35 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:05.117 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.117 fio-3.35 00:13:05.117 Starting 1 process 00:13:10.410 10:40:40 -- ublk/ublk_recovery.sh@36 -- # kill -9 69623 00:13:10.410 10:40:40 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:15.700 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69623 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:15.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.700 10:40:45 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69774 00:13:15.700 10:40:45 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:15.700 10:40:45 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69774 00:13:15.700 10:40:45 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:15.700 10:40:45 -- common/autotest_common.sh@829 -- # '[' -z 69774 ']' 00:13:15.700 10:40:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.700 10:40:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.700 10:40:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.700 10:40:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.700 10:40:45 -- common/autotest_common.sh@10 -- # set +x 00:13:15.700 [2024-12-03 10:40:45.607849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
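The kill -9 at ublk_recovery.sh@36 and the second spdk_tgt launch here are the core of the test: fio keeps 128 requests in flight against /dev/ublkb1 while the target process is hard-killed and restarted underneath it. Condensed from the RPC calls and the fio job traced in this run (rpc.py standing in for the full scripts/rpc.py path), the whole cycle is roughly:

  # Setup: ublk target, a 64 MiB malloc bdev with 4 KiB blocks, and ublk
  # device 1 with 2 queues at queue depth 128 (-> /dev/ublkb1).
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
  # Drive mixed random I/O for 60 s while the target dies and comes back.
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_pid"                        # hard-kill mid-I/O (ublk_recovery.sh@36)
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &  # restart, then wait for the RPC socket
  spdk_pid=$!
  # Recreate the backing bdev, then re-attach the still-open kernel device
  # through the USER_RECOVERY control commands visible in the log.
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1

The fio summary further down bears this out: err= 0 with a maximum completion latency around 30 s, i.e. I/O stalled across the roughly half-minute outage between the kill and the "recover done successfully" notice, but none of it failed.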
00:13:15.701 [2024-12-03 10:40:45.607962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69774 ] 00:13:15.701 [2024-12-03 10:40:45.757089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.701 [2024-12-03 10:40:45.965889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:15.701 [2024-12-03 10:40:45.966327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.701 [2024-12-03 10:40:45.966408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.636 10:40:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.636 10:40:47 -- common/autotest_common.sh@862 -- # return 0 00:13:16.636 10:40:47 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:16.636 10:40:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.636 10:40:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.636 [2024-12-03 10:40:47.104934] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:16.636 10:40:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.636 10:40:47 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:16.636 10:40:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.636 10:40:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.636 malloc0 00:13:16.636 10:40:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.636 10:40:47 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:16.636 10:40:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.636 10:40:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.636 [2024-12-03 10:40:47.207198] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:16.636 [2024-12-03 10:40:47.207235] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:16.636 [2024-12-03 10:40:47.207243] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:16.636 [2024-12-03 10:40:47.215135] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:16.636 [2024-12-03 10:40:47.215154] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:16.636 [2024-12-03 10:40:47.215230] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:16.636 1 00:13:16.636 10:40:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.636 10:40:47 -- ublk/ublk_recovery.sh@52 -- # wait 69658 00:13:43.167 [2024-12-03 10:41:11.031085] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:43.167 [2024-12-03 10:41:11.037610] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:43.167 [2024-12-03 10:41:11.045228] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:43.167 [2024-12-03 10:41:11.045251] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:09.706 00:14:09.706 fio_test: (groupid=0, jobs=1): err= 0: pid=69661: Tue Dec 3 10:41:35 2024 00:14:09.706 read: IOPS=14.3k, BW=55.7MiB/s (58.5MB/s)(3345MiB/60002msec) 00:14:09.706 slat (nsec): min=1014, max=213640, avg=5228.69, 
stdev=1356.29 00:14:09.706 clat (usec): min=698, max=30525k, avg=4011.69, stdev=237848.78 00:14:09.706 lat (usec): min=703, max=30525k, avg=4016.91, stdev=237848.77 00:14:09.706 clat percentiles (usec): 00:14:09.706 | 1.00th=[ 1778], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1926], 00:14:09.706 | 30.00th=[ 1958], 40.00th=[ 1991], 50.00th=[ 2040], 60.00th=[ 2089], 00:14:09.706 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2180], 95.00th=[ 3130], 00:14:09.706 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7111], 99.95th=[ 8225], 00:14:09.706 | 99.99th=[13304] 00:14:09.706 bw ( KiB/s): min=24288, max=125368, per=100.00%, avg=114300.34, stdev=16358.12, samples=59 00:14:09.706 iops : min= 6072, max=31342, avg=28575.08, stdev=4089.53, samples=59 00:14:09.706 write: IOPS=14.2k, BW=55.7MiB/s (58.4MB/s)(3340MiB/60002msec); 0 zone resets 00:14:09.706 slat (nsec): min=932, max=252350, avg=5357.91, stdev=1451.69 00:14:09.706 clat (usec): min=680, max=30525k, avg=4953.23, stdev=287748.61 00:14:09.706 lat (usec): min=685, max=30525k, avg=4958.59, stdev=287748.61 00:14:09.706 clat percentiles (usec): 00:14:09.706 | 1.00th=[ 1827], 5.00th=[ 1958], 10.00th=[ 1975], 20.00th=[ 2008], 00:14:09.706 | 30.00th=[ 2040], 40.00th=[ 2073], 50.00th=[ 2147], 60.00th=[ 2180], 00:14:09.706 | 70.00th=[ 2212], 80.00th=[ 2245], 90.00th=[ 2278], 95.00th=[ 3064], 00:14:09.706 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7177], 99.95th=[ 8291], 00:14:09.706 | 99.99th=[13435] 00:14:09.706 bw ( KiB/s): min=24552, max=125912, per=100.00%, avg=114154.44, stdev=16413.21, samples=59 00:14:09.706 iops : min= 6138, max=31478, avg=28538.61, stdev=4103.30, samples=59 00:14:09.706 lat (usec) : 750=0.01%, 1000=0.01% 00:14:09.706 lat (msec) : 2=29.58%, 4=67.46%, 10=2.92%, 20=0.03%, >=2000=0.01% 00:14:09.706 cpu : usr=3.21%, sys=15.46%, ctx=56513, majf=0, minf=13 00:14:09.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:09.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.706 issued rwts: total=856267,855031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.706 00:14:09.706 Run status group 0 (all jobs): 00:14:09.706 READ: bw=55.7MiB/s (58.5MB/s), 55.7MiB/s-55.7MiB/s (58.5MB/s-58.5MB/s), io=3345MiB (3507MB), run=60002-60002msec 00:14:09.706 WRITE: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=3340MiB (3502MB), run=60002-60002msec 00:14:09.706 00:14:09.706 Disk stats (read/write): 00:14:09.706 ublkb1: ios=853076/851996, merge=0/0, ticks=3381693/4109851, in_queue=7491544, util=99.89% 00:14:09.706 10:41:35 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:09.706 10:41:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 10:41:35 -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 [2024-12-03 10:41:35.786570] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.706 [2024-12-03 10:41:35.822216] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.706 [2024-12-03 10:41:35.822360] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.706 [2024-12-03 10:41:35.829095] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.706 [2024-12-03 10:41:35.829198] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
00:14:09.706 [2024-12-03 10:41:35.829207] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:09.706 10:41:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 10:41:35 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:09.706 10:41:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.706 10:41:35 -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 [2024-12-03 10:41:35.845142] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:09.706 [2024-12-03 10:41:35.853072] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:09.706 [2024-12-03 10:41:35.853102] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:09.706 10:41:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.706 10:41:35 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:09.706 10:41:35 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:09.706 10:41:35 -- ublk/ublk_recovery.sh@14 -- # killprocess 69774 00:14:09.706 10:41:35 -- common/autotest_common.sh@936 -- # '[' -z 69774 ']' 00:14:09.706 10:41:35 -- common/autotest_common.sh@940 -- # kill -0 69774 00:14:09.706 10:41:35 -- common/autotest_common.sh@941 -- # uname 00:14:09.706 10:41:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.706 10:41:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69774 00:14:09.706 killing process with pid 69774 00:14:09.706 10:41:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:09.706 10:41:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:09.706 10:41:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69774' 00:14:09.706 10:41:35 -- common/autotest_common.sh@955 -- # kill 69774 00:14:09.706 10:41:35 -- common/autotest_common.sh@960 -- # wait 69774 00:14:09.706 [2024-12-03 10:41:36.930578] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:09.706 [2024-12-03 10:41:36.930628] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:09.706 00:14:09.706 real 1m4.326s 00:14:09.706 user 1m47.514s 00:14:09.706 sys 0m21.523s 00:14:09.706 10:41:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:09.706 ************************************ 00:14:09.706 END TEST ublk_recovery 00:14:09.706 ************************************ 00:14:09.706 10:41:37 -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 10:41:37 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:09.706 10:41:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.706 10:41:37 -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 10:41:37 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:09.706 10:41:37 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:09.706 10:41:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:09.707 10:41:37 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:14:09.707 10:41:37 -- common/autotest_common.sh@10 -- # set +x 00:14:09.707 ************************************ 00:14:09.707 START TEST ftl 00:14:09.707 ************************************ 00:14:09.707 10:41:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:09.707 * Looking for test storage... 00:14:09.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:09.707 10:41:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:09.707 10:41:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:09.707 10:41:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:09.707 10:41:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:09.707 10:41:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:09.707 10:41:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:09.707 10:41:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:09.707 10:41:37 -- scripts/common.sh@335 -- # IFS=.-: 00:14:09.707 10:41:37 -- scripts/common.sh@335 -- # read -ra ver1 00:14:09.707 10:41:37 -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.707 10:41:37 -- scripts/common.sh@336 -- # read -ra ver2 00:14:09.707 10:41:37 -- scripts/common.sh@337 -- # local 'op=<' 00:14:09.707 10:41:37 -- scripts/common.sh@339 -- # ver1_l=2 00:14:09.707 10:41:37 -- scripts/common.sh@340 -- # ver2_l=1 00:14:09.707 10:41:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:09.707 10:41:37 -- scripts/common.sh@343 -- # case "$op" in 00:14:09.707 10:41:37 -- scripts/common.sh@344 -- # : 1 00:14:09.707 10:41:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:09.707 10:41:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.707 10:41:37 -- scripts/common.sh@364 -- # decimal 1 00:14:09.707 10:41:37 -- scripts/common.sh@352 -- # local d=1 00:14:09.707 10:41:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.707 10:41:37 -- scripts/common.sh@354 -- # echo 1 00:14:09.707 10:41:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:09.707 10:41:37 -- scripts/common.sh@365 -- # decimal 2 00:14:09.707 10:41:37 -- scripts/common.sh@352 -- # local d=2 00:14:09.707 10:41:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.707 10:41:37 -- scripts/common.sh@354 -- # echo 2 00:14:09.707 10:41:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:09.707 10:41:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:09.707 10:41:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:09.707 10:41:37 -- scripts/common.sh@367 -- # return 0 00:14:09.707 10:41:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.707 10:41:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:09.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.707 --rc genhtml_branch_coverage=1 00:14:09.707 --rc genhtml_function_coverage=1 00:14:09.707 --rc genhtml_legend=1 00:14:09.707 --rc geninfo_all_blocks=1 00:14:09.707 --rc geninfo_unexecuted_blocks=1 00:14:09.707 00:14:09.707 ' 00:14:09.707 10:41:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:09.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.707 --rc genhtml_branch_coverage=1 00:14:09.707 --rc genhtml_function_coverage=1 00:14:09.707 --rc genhtml_legend=1 00:14:09.707 --rc geninfo_all_blocks=1 00:14:09.707 --rc geninfo_unexecuted_blocks=1 00:14:09.707 00:14:09.707 ' 00:14:09.707 10:41:37 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:09.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.707 --rc genhtml_branch_coverage=1 00:14:09.707 --rc genhtml_function_coverage=1 00:14:09.707 --rc genhtml_legend=1 00:14:09.707 --rc geninfo_all_blocks=1 00:14:09.707 --rc geninfo_unexecuted_blocks=1 00:14:09.707 00:14:09.707 ' 00:14:09.707 10:41:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:09.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.707 --rc genhtml_branch_coverage=1 00:14:09.707 --rc genhtml_function_coverage=1 00:14:09.707 --rc genhtml_legend=1 00:14:09.707 --rc geninfo_all_blocks=1 00:14:09.707 --rc geninfo_unexecuted_blocks=1 00:14:09.707 00:14:09.707 ' 00:14:09.707 10:41:37 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:09.707 10:41:37 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:09.707 10:41:37 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:09.707 10:41:37 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:09.707 10:41:37 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:09.707 10:41:37 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:09.707 10:41:37 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.707 10:41:37 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:09.707 10:41:37 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:09.707 10:41:37 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:09.707 10:41:37 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:09.707 10:41:37 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:09.707 10:41:37 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:09.707 10:41:37 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:09.707 10:41:37 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:09.707 10:41:37 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:09.707 10:41:37 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:09.707 10:41:37 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:09.707 10:41:37 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:09.707 10:41:37 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:09.707 10:41:37 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:09.707 10:41:37 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:09.707 10:41:37 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:09.707 10:41:37 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:09.707 10:41:37 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:09.707 10:41:37 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:09.707 10:41:37 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:09.707 10:41:37 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:09.707 10:41:37 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:09.707 10:41:37 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.707 10:41:37 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:09.707 10:41:37 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:09.707 10:41:37 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:09.707 10:41:37 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:09.707 10:41:37 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:09.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.707 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.707 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.707 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.707 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.708 10:41:38 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70585 00:14:09.708 10:41:38 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:09.708 10:41:38 -- ftl/ftl.sh@38 -- # waitforlisten 70585 00:14:09.708 10:41:38 -- common/autotest_common.sh@829 -- # '[' -z 70585 ']' 00:14:09.708 10:41:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.708 10:41:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.708 10:41:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.708 10:41:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.708 10:41:38 -- common/autotest_common.sh@10 -- # set +x 00:14:09.708 [2024-12-03 10:41:38.508800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:09.708 [2024-12-03 10:41:38.509084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:14:09.708 [2024-12-03 10:41:38.652637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.708 [2024-12-03 10:41:38.820915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:09.708 [2024-12-03 10:41:38.821101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.708 10:41:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.708 10:41:39 -- common/autotest_common.sh@862 -- # return 0 00:14:09.708 10:41:39 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:09.708 10:41:39 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:09.708 10:41:40 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:09.708 10:41:40 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:10.273 10:41:40 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:10.273 10:41:40 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:10.273 10:41:40 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:10.273 10:41:40 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:10.273 10:41:40 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:10.273 10:41:40 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:10.273 10:41:40 -- ftl/ftl.sh@50 
-- # break 00:14:10.273 10:41:40 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:10.273 10:41:40 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:10.273 10:41:40 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:10.273 10:41:40 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:10.531 10:41:40 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:10.531 10:41:40 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:10.531 10:41:40 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:10.531 10:41:40 -- ftl/ftl.sh@63 -- # break 00:14:10.531 10:41:40 -- ftl/ftl.sh@66 -- # killprocess 70585 00:14:10.531 10:41:40 -- common/autotest_common.sh@936 -- # '[' -z 70585 ']' 00:14:10.531 10:41:40 -- common/autotest_common.sh@940 -- # kill -0 70585 00:14:10.531 10:41:40 -- common/autotest_common.sh@941 -- # uname 00:14:10.531 10:41:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.531 10:41:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70585 00:14:10.531 killing process with pid 70585 00:14:10.531 10:41:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:10.531 10:41:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:10.531 10:41:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70585' 00:14:10.531 10:41:41 -- common/autotest_common.sh@955 -- # kill 70585 00:14:10.531 10:41:41 -- common/autotest_common.sh@960 -- # wait 70585 00:14:11.909 10:41:42 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:11.909 10:41:42 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:11.909 10:41:42 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:11.909 10:41:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:11.909 10:41:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.909 10:41:42 -- common/autotest_common.sh@10 -- # set +x 00:14:11.909 ************************************ 00:14:11.909 START TEST ftl_fio_basic 00:14:11.909 ************************************ 00:14:11.909 10:41:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:11.909 * Looking for test storage... 
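The two jq filters traced just above are how ftl.sh assigns roles to the namespaces setup.sh bound: the write-buffer cache candidate must be non-zoned, at least 1310720 blocks, and report md_size==64 (64 bytes of separate per-block metadata), while any other large-enough non-zoned namespace can serve as the base device. Reproduced with the filters copied verbatim from the trace:

  # Pick the NV-cache candidate: non-zoned, >= 1310720 blocks, md_size == 64.
  rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'     # -> 0000:00:06.0 in this run
  # Pick the base candidate: any other non-zoned bdev of sufficient size.
  rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0"
               and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'     # -> 0000:00:07.0 in this run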
00:14:11.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:11.909 10:41:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:11.909 10:41:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:11.909 10:41:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:11.909 10:41:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:11.909 10:41:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:11.909 10:41:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:11.909 10:41:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:11.909 10:41:42 -- scripts/common.sh@335 -- # IFS=.-: 00:14:11.909 10:41:42 -- scripts/common.sh@335 -- # read -ra ver1 00:14:11.909 10:41:42 -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.909 10:41:42 -- scripts/common.sh@336 -- # read -ra ver2 00:14:11.909 10:41:42 -- scripts/common.sh@337 -- # local 'op=<' 00:14:11.909 10:41:42 -- scripts/common.sh@339 -- # ver1_l=2 00:14:11.909 10:41:42 -- scripts/common.sh@340 -- # ver2_l=1 00:14:11.909 10:41:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:11.909 10:41:42 -- scripts/common.sh@343 -- # case "$op" in 00:14:11.909 10:41:42 -- scripts/common.sh@344 -- # : 1 00:14:11.909 10:41:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:11.909 10:41:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:11.909 10:41:42 -- scripts/common.sh@364 -- # decimal 1 00:14:11.909 10:41:42 -- scripts/common.sh@352 -- # local d=1 00:14:11.909 10:41:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.909 10:41:42 -- scripts/common.sh@354 -- # echo 1 00:14:11.909 10:41:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:11.909 10:41:42 -- scripts/common.sh@365 -- # decimal 2 00:14:11.909 10:41:42 -- scripts/common.sh@352 -- # local d=2 00:14:11.909 10:41:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.909 10:41:42 -- scripts/common.sh@354 -- # echo 2 00:14:11.909 10:41:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:11.909 10:41:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:11.909 10:41:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:11.909 10:41:42 -- scripts/common.sh@367 -- # return 0 00:14:11.909 10:41:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.909 10:41:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:11.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.909 --rc genhtml_branch_coverage=1 00:14:11.909 --rc genhtml_function_coverage=1 00:14:11.909 --rc genhtml_legend=1 00:14:11.909 --rc geninfo_all_blocks=1 00:14:11.909 --rc geninfo_unexecuted_blocks=1 00:14:11.909 00:14:11.909 ' 00:14:11.909 10:41:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:11.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.909 --rc genhtml_branch_coverage=1 00:14:11.909 --rc genhtml_function_coverage=1 00:14:11.909 --rc genhtml_legend=1 00:14:11.909 --rc geninfo_all_blocks=1 00:14:11.909 --rc geninfo_unexecuted_blocks=1 00:14:11.909 00:14:11.909 ' 00:14:11.909 10:41:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:11.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.909 --rc genhtml_branch_coverage=1 00:14:11.909 --rc genhtml_function_coverage=1 00:14:11.909 --rc genhtml_legend=1 00:14:11.909 --rc geninfo_all_blocks=1 00:14:11.909 --rc geninfo_unexecuted_blocks=1 00:14:11.909 00:14:11.909 ' 00:14:11.909 10:41:42 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:11.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.909 --rc genhtml_branch_coverage=1 00:14:11.909 --rc genhtml_function_coverage=1 00:14:11.909 --rc genhtml_legend=1 00:14:11.909 --rc geninfo_all_blocks=1 00:14:11.909 --rc geninfo_unexecuted_blocks=1 00:14:11.909 00:14:11.909 ' 00:14:11.909 10:41:42 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:11.909 10:41:42 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:11.909 10:41:42 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:11.909 10:41:42 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:11.909 10:41:42 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:11.909 10:41:42 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:11.909 10:41:42 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.909 10:41:42 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:11.909 10:41:42 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:11.909 10:41:42 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.909 10:41:42 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.909 10:41:42 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:11.909 10:41:42 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:11.909 10:41:42 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:11.909 10:41:42 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:11.909 10:41:42 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:11.909 10:41:42 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:11.909 10:41:42 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.909 10:41:42 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.909 10:41:42 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:11.909 10:41:42 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:11.909 10:41:42 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:11.909 10:41:42 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:11.909 10:41:42 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:11.909 10:41:42 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:11.909 10:41:42 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:11.909 10:41:42 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:11.909 10:41:42 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:11.909 10:41:42 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:11.909 10:41:42 -- ftl/fio.sh@11 -- # declare -A suite 00:14:11.909 10:41:42 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:11.909 10:41:42 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:11.909 10:41:42 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:11.909 10:41:42 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.909 10:41:42 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:11.909 10:41:42 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:11.909 10:41:42 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:11.909 10:41:42 -- ftl/fio.sh@26 -- # uuid= 00:14:11.909 10:41:42 -- ftl/fio.sh@27 -- # timeout=240 00:14:11.909 10:41:42 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:11.909 10:41:42 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:11.910 10:41:42 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:11.910 10:41:42 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:11.910 10:41:42 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:11.910 10:41:42 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:11.910 10:41:42 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:11.910 10:41:42 -- ftl/fio.sh@45 -- # svcpid=70716 00:14:11.910 10:41:42 -- ftl/fio.sh@46 -- # waitforlisten 70716 00:14:11.910 10:41:42 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:11.910 10:41:42 -- common/autotest_common.sh@829 -- # '[' -z 70716 ']' 00:14:11.910 10:41:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.910 10:41:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.910 10:41:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.910 10:41:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.910 10:41:42 -- common/autotest_common.sh@10 -- # set +x 00:14:12.170 [2024-12-03 10:41:42.528355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
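Everything from here to the bdev_ftl_create call is the construction of the ftl0 device under test. Condensed from the RPCs traced below, with names, sizes, and PCI addresses exactly as they appear in this run; $lvs_uuid and $lvol_uuid stand for the UUIDs the lvstore and lvol calls return (3a0537fe-... and 7a01d776-... here):

  # Base side: attach the 0000:00:07.0 namespace and carve a thin lvol.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # -> nvme0n1
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"            # 103424 MiB, thin
  # Cache side: attach 0000:00:06.0 and split off a 5171 MiB partition.
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0    # -> nvc0n1
  rpc.py bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0
  # Bind them together as ftl0 with a 60 MiB L2P DRAM budget; -t 240
  # matches the suite's timeout=240 set above.
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 --l2p_dram_limit 60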
00:14:12.170 [2024-12-03 10:41:42.529390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70716 ] 00:14:12.170 [2024-12-03 10:41:42.687502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:12.429 [2024-12-03 10:41:42.853280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:12.429 [2024-12-03 10:41:42.853979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.429 [2024-12-03 10:41:42.854289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.429 [2024-12-03 10:41:42.854270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.806 10:41:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.806 10:41:44 -- common/autotest_common.sh@862 -- # return 0 00:14:13.806 10:41:44 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:13.806 10:41:44 -- ftl/common.sh@54 -- # local name=nvme0 00:14:13.806 10:41:44 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:13.806 10:41:44 -- ftl/common.sh@56 -- # local size=103424 00:14:13.806 10:41:44 -- ftl/common.sh@59 -- # local base_bdev 00:14:13.806 10:41:44 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:13.806 10:41:44 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:13.806 10:41:44 -- ftl/common.sh@62 -- # local base_size 00:14:13.806 10:41:44 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:13.806 10:41:44 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:13.806 10:41:44 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:13.806 10:41:44 -- common/autotest_common.sh@1369 -- # local bs 00:14:13.806 10:41:44 -- common/autotest_common.sh@1370 -- # local nb 00:14:13.806 10:41:44 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:14.064 10:41:44 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:14.064 { 00:14:14.064 "name": "nvme0n1", 00:14:14.064 "aliases": [ 00:14:14.064 "5684ade9-ab3c-4f6a-b7d1-16e19c0ba09a" 00:14:14.064 ], 00:14:14.064 "product_name": "NVMe disk", 00:14:14.064 "block_size": 4096, 00:14:14.064 "num_blocks": 1310720, 00:14:14.064 "uuid": "5684ade9-ab3c-4f6a-b7d1-16e19c0ba09a", 00:14:14.064 "assigned_rate_limits": { 00:14:14.064 "rw_ios_per_sec": 0, 00:14:14.064 "rw_mbytes_per_sec": 0, 00:14:14.064 "r_mbytes_per_sec": 0, 00:14:14.064 "w_mbytes_per_sec": 0 00:14:14.064 }, 00:14:14.064 "claimed": false, 00:14:14.064 "zoned": false, 00:14:14.064 "supported_io_types": { 00:14:14.064 "read": true, 00:14:14.064 "write": true, 00:14:14.064 "unmap": true, 00:14:14.064 "write_zeroes": true, 00:14:14.064 "flush": true, 00:14:14.064 "reset": true, 00:14:14.064 "compare": true, 00:14:14.064 "compare_and_write": false, 00:14:14.064 "abort": true, 00:14:14.064 "nvme_admin": true, 00:14:14.064 "nvme_io": true 00:14:14.064 }, 00:14:14.064 "driver_specific": { 00:14:14.064 "nvme": [ 00:14:14.064 { 00:14:14.064 "pci_address": "0000:00:07.0", 00:14:14.064 "trid": { 00:14:14.064 "trtype": "PCIe", 00:14:14.065 "traddr": "0000:00:07.0" 00:14:14.065 }, 00:14:14.065 "ctrlr_data": { 00:14:14.065 "cntlid": 0, 00:14:14.065 "vendor_id": "0x1b36", 00:14:14.065 "model_number": "QEMU NVMe Ctrl", 00:14:14.065 "serial_number": 
"12341", 00:14:14.065 "firmware_revision": "8.0.0", 00:14:14.065 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:14.065 "oacs": { 00:14:14.065 "security": 0, 00:14:14.065 "format": 1, 00:14:14.065 "firmware": 0, 00:14:14.065 "ns_manage": 1 00:14:14.065 }, 00:14:14.065 "multi_ctrlr": false, 00:14:14.065 "ana_reporting": false 00:14:14.065 }, 00:14:14.065 "vs": { 00:14:14.065 "nvme_version": "1.4" 00:14:14.065 }, 00:14:14.065 "ns_data": { 00:14:14.065 "id": 1, 00:14:14.065 "can_share": false 00:14:14.065 } 00:14:14.065 } 00:14:14.065 ], 00:14:14.065 "mp_policy": "active_passive" 00:14:14.065 } 00:14:14.065 } 00:14:14.065 ]' 00:14:14.065 10:41:44 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:14.065 10:41:44 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:14.065 10:41:44 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:14.065 10:41:44 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:14.065 10:41:44 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:14.065 10:41:44 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:14.065 10:41:44 -- ftl/common.sh@63 -- # base_size=5120 00:14:14.065 10:41:44 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:14.065 10:41:44 -- ftl/common.sh@67 -- # clear_lvols 00:14:14.065 10:41:44 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:14.065 10:41:44 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:14.323 10:41:44 -- ftl/common.sh@28 -- # stores= 00:14:14.323 10:41:44 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:14.323 10:41:44 -- ftl/common.sh@68 -- # lvs=3a0537fe-1267-4190-9664-774c96a8affb 00:14:14.323 10:41:44 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3a0537fe-1267-4190-9664-774c96a8affb 00:14:14.587 10:41:45 -- ftl/fio.sh@48 -- # split_bdev=7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:14.587 10:41:45 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:14.587 10:41:45 -- ftl/common.sh@35 -- # local name=nvc0 00:14:14.587 10:41:45 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:14.587 10:41:45 -- ftl/common.sh@37 -- # local base_bdev=7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:14.587 10:41:45 -- ftl/common.sh@38 -- # local cache_size= 00:14:14.587 10:41:45 -- ftl/common.sh@41 -- # get_bdev_size 7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:14.587 10:41:45 -- common/autotest_common.sh@1367 -- # local bdev_name=7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:14.587 10:41:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:14.587 10:41:45 -- common/autotest_common.sh@1369 -- # local bs 00:14:14.587 10:41:45 -- common/autotest_common.sh@1370 -- # local nb 00:14:14.587 10:41:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:14.847 10:41:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:14.847 { 00:14:14.847 "name": "7a01d776-db5e-4d4d-ae64-19435eb555b1", 00:14:14.847 "aliases": [ 00:14:14.847 "lvs/nvme0n1p0" 00:14:14.847 ], 00:14:14.847 "product_name": "Logical Volume", 00:14:14.847 "block_size": 4096, 00:14:14.847 "num_blocks": 26476544, 00:14:14.847 "uuid": "7a01d776-db5e-4d4d-ae64-19435eb555b1", 00:14:14.847 "assigned_rate_limits": { 00:14:14.847 "rw_ios_per_sec": 0, 00:14:14.847 "rw_mbytes_per_sec": 0, 00:14:14.847 "r_mbytes_per_sec": 0, 00:14:14.847 
"w_mbytes_per_sec": 0 00:14:14.847 }, 00:14:14.847 "claimed": false, 00:14:14.847 "zoned": false, 00:14:14.847 "supported_io_types": { 00:14:14.847 "read": true, 00:14:14.847 "write": true, 00:14:14.847 "unmap": true, 00:14:14.847 "write_zeroes": true, 00:14:14.847 "flush": false, 00:14:14.847 "reset": true, 00:14:14.847 "compare": false, 00:14:14.847 "compare_and_write": false, 00:14:14.847 "abort": false, 00:14:14.847 "nvme_admin": false, 00:14:14.847 "nvme_io": false 00:14:14.847 }, 00:14:14.847 "driver_specific": { 00:14:14.847 "lvol": { 00:14:14.847 "lvol_store_uuid": "3a0537fe-1267-4190-9664-774c96a8affb", 00:14:14.847 "base_bdev": "nvme0n1", 00:14:14.847 "thin_provision": true, 00:14:14.847 "snapshot": false, 00:14:14.847 "clone": false, 00:14:14.847 "esnap_clone": false 00:14:14.847 } 00:14:14.847 } 00:14:14.847 } 00:14:14.847 ]' 00:14:14.847 10:41:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:14.847 10:41:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:14.847 10:41:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:14.847 10:41:45 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:14.847 10:41:45 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:14.847 10:41:45 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:14.847 10:41:45 -- ftl/common.sh@41 -- # local base_size=5171 00:14:14.847 10:41:45 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:14.847 10:41:45 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:15.104 10:41:45 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:15.104 10:41:45 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:15.104 10:41:45 -- ftl/common.sh@48 -- # get_bdev_size 7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:15.104 10:41:45 -- common/autotest_common.sh@1367 -- # local bdev_name=7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:15.104 10:41:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:15.104 10:41:45 -- common/autotest_common.sh@1369 -- # local bs 00:14:15.104 10:41:45 -- common/autotest_common.sh@1370 -- # local nb 00:14:15.104 10:41:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:15.361 10:41:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:15.361 { 00:14:15.361 "name": "7a01d776-db5e-4d4d-ae64-19435eb555b1", 00:14:15.361 "aliases": [ 00:14:15.361 "lvs/nvme0n1p0" 00:14:15.361 ], 00:14:15.361 "product_name": "Logical Volume", 00:14:15.361 "block_size": 4096, 00:14:15.361 "num_blocks": 26476544, 00:14:15.361 "uuid": "7a01d776-db5e-4d4d-ae64-19435eb555b1", 00:14:15.361 "assigned_rate_limits": { 00:14:15.361 "rw_ios_per_sec": 0, 00:14:15.361 "rw_mbytes_per_sec": 0, 00:14:15.361 "r_mbytes_per_sec": 0, 00:14:15.361 "w_mbytes_per_sec": 0 00:14:15.361 }, 00:14:15.361 "claimed": false, 00:14:15.361 "zoned": false, 00:14:15.361 "supported_io_types": { 00:14:15.361 "read": true, 00:14:15.361 "write": true, 00:14:15.361 "unmap": true, 00:14:15.361 "write_zeroes": true, 00:14:15.361 "flush": false, 00:14:15.361 "reset": true, 00:14:15.361 "compare": false, 00:14:15.361 "compare_and_write": false, 00:14:15.361 "abort": false, 00:14:15.361 "nvme_admin": false, 00:14:15.361 "nvme_io": false 00:14:15.361 }, 00:14:15.361 "driver_specific": { 00:14:15.361 "lvol": { 00:14:15.361 "lvol_store_uuid": "3a0537fe-1267-4190-9664-774c96a8affb", 00:14:15.361 "base_bdev": "nvme0n1", 00:14:15.361 "thin_provision": true, 
00:14:15.361 "snapshot": false, 00:14:15.361 "clone": false, 00:14:15.361 "esnap_clone": false 00:14:15.361 } 00:14:15.361 } 00:14:15.361 } 00:14:15.361 ]' 00:14:15.361 10:41:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:15.362 10:41:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:15.362 10:41:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:15.362 10:41:45 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:15.362 10:41:45 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:15.362 10:41:45 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:15.362 10:41:45 -- ftl/common.sh@48 -- # cache_size=5171 00:14:15.362 10:41:45 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:15.620 10:41:45 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:15.620 10:41:45 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:15.620 10:41:45 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:15.620 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:15.620 10:41:46 -- ftl/fio.sh@56 -- # get_bdev_size 7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:15.620 10:41:46 -- common/autotest_common.sh@1367 -- # local bdev_name=7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:15.620 10:41:46 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:15.620 10:41:46 -- common/autotest_common.sh@1369 -- # local bs 00:14:15.620 10:41:46 -- common/autotest_common.sh@1370 -- # local nb 00:14:15.620 10:41:46 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a01d776-db5e-4d4d-ae64-19435eb555b1 00:14:15.620 10:41:46 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:15.620 { 00:14:15.620 "name": "7a01d776-db5e-4d4d-ae64-19435eb555b1", 00:14:15.620 "aliases": [ 00:14:15.620 "lvs/nvme0n1p0" 00:14:15.620 ], 00:14:15.620 "product_name": "Logical Volume", 00:14:15.620 "block_size": 4096, 00:14:15.620 "num_blocks": 26476544, 00:14:15.620 "uuid": "7a01d776-db5e-4d4d-ae64-19435eb555b1", 00:14:15.620 "assigned_rate_limits": { 00:14:15.620 "rw_ios_per_sec": 0, 00:14:15.620 "rw_mbytes_per_sec": 0, 00:14:15.620 "r_mbytes_per_sec": 0, 00:14:15.620 "w_mbytes_per_sec": 0 00:14:15.620 }, 00:14:15.620 "claimed": false, 00:14:15.620 "zoned": false, 00:14:15.620 "supported_io_types": { 00:14:15.620 "read": true, 00:14:15.620 "write": true, 00:14:15.620 "unmap": true, 00:14:15.620 "write_zeroes": true, 00:14:15.620 "flush": false, 00:14:15.620 "reset": true, 00:14:15.620 "compare": false, 00:14:15.620 "compare_and_write": false, 00:14:15.620 "abort": false, 00:14:15.620 "nvme_admin": false, 00:14:15.620 "nvme_io": false 00:14:15.620 }, 00:14:15.620 "driver_specific": { 00:14:15.620 "lvol": { 00:14:15.620 "lvol_store_uuid": "3a0537fe-1267-4190-9664-774c96a8affb", 00:14:15.620 "base_bdev": "nvme0n1", 00:14:15.620 "thin_provision": true, 00:14:15.620 "snapshot": false, 00:14:15.620 "clone": false, 00:14:15.620 "esnap_clone": false 00:14:15.620 } 00:14:15.620 } 00:14:15.620 } 00:14:15.620 ]' 00:14:15.620 10:41:46 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:15.620 10:41:46 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:15.620 10:41:46 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:15.879 10:41:46 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:15.879 10:41:46 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:15.879 10:41:46 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:15.879 
10:41:46 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:15.879 10:41:46 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:15.879 10:41:46 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7a01d776-db5e-4d4d-ae64-19435eb555b1 -c nvc0n1p0 --l2p_dram_limit 60 00:14:15.880 [2024-12-03 10:41:46.437478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.437517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:15.880 [2024-12-03 10:41:46.437531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:14:15.880 [2024-12-03 10:41:46.437538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.437590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.437599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:15.880 [2024-12-03 10:41:46.437609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:14:15.880 [2024-12-03 10:41:46.437616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.437640] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:15.880 [2024-12-03 10:41:46.438199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:15.880 [2024-12-03 10:41:46.438221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.438229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:15.880 [2024-12-03 10:41:46.438238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:14:15.880 [2024-12-03 10:41:46.438245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.438278] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7f307c75-36be-4cb6-88c7-03c27b15ac79 00:14:15.880 [2024-12-03 10:41:46.439547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.439688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:15.880 [2024-12-03 10:41:46.439704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:14:15.880 [2024-12-03 10:41:46.439714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.446587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.446698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:15.880 [2024-12-03 10:41:46.446711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.783 ms 00:14:15.880 [2024-12-03 10:41:46.446719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.446787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.446800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:15.880 [2024-12-03 10:41:46.446809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:14:15.880 [2024-12-03 10:41:46.446818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.446866] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.446875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:15.880 [2024-12-03 10:41:46.446882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:15.880 [2024-12-03 10:41:46.446892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.446916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:15.880 [2024-12-03 10:41:46.450235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.450259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:15.880 [2024-12-03 10:41:46.450270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:14:15.880 [2024-12-03 10:41:46.450277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.450315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.450323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:15.880 [2024-12-03 10:41:46.450332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:14:15.880 [2024-12-03 10:41:46.450339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.450363] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:15.880 [2024-12-03 10:41:46.450456] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:15.880 [2024-12-03 10:41:46.450475] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:15.880 [2024-12-03 10:41:46.450485] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:15.880 [2024-12-03 10:41:46.450495] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450504] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450513] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:15.880 [2024-12-03 10:41:46.450519] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:15.880 [2024-12-03 10:41:46.450530] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:15.880 [2024-12-03 10:41:46.450536] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:15.880 [2024-12-03 10:41:46.450545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.450551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:15.880 [2024-12-03 10:41:46.450560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:14:15.880 [2024-12-03 10:41:46.450567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.450624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.880 [2024-12-03 10:41:46.450631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:15.880 [2024-12-03 10:41:46.450640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.036 ms 00:14:15.880 [2024-12-03 10:41:46.450646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.880 [2024-12-03 10:41:46.450717] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:15.880 [2024-12-03 10:41:46.450725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:15.880 [2024-12-03 10:41:46.450735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:15.880 [2024-12-03 10:41:46.450756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:15.880 [2024-12-03 10:41:46.450780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:15.880 [2024-12-03 10:41:46.450793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:15.880 [2024-12-03 10:41:46.450801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:15.880 [2024-12-03 10:41:46.450809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:15.880 [2024-12-03 10:41:46.450815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:15.880 [2024-12-03 10:41:46.450823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:15.880 [2024-12-03 10:41:46.450829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:15.880 [2024-12-03 10:41:46.450844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:15.880 [2024-12-03 10:41:46.450851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:15.880 [2024-12-03 10:41:46.450865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:15.880 [2024-12-03 10:41:46.450871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:15.880 [2024-12-03 10:41:46.450885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:15.880 [2024-12-03 10:41:46.450906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:15.880 [2024-12-03 10:41:46.450924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450931] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:15.880 [2024-12-03 10:41:46.450946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:15.880 [2024-12-03 10:41:46.450963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:15.880 [2024-12-03 10:41:46.450971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:15.880 [2024-12-03 10:41:46.450977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:15.881 [2024-12-03 10:41:46.450985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:15.881 [2024-12-03 10:41:46.450991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:15.881 [2024-12-03 10:41:46.450999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:15.881 [2024-12-03 10:41:46.451005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:15.881 [2024-12-03 10:41:46.451013] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:15.881 [2024-12-03 10:41:46.451023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:15.881 [2024-12-03 10:41:46.451031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:15.881 [2024-12-03 10:41:46.451038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:15.881 [2024-12-03 10:41:46.451046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:15.881 [2024-12-03 10:41:46.451063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:15.881 [2024-12-03 10:41:46.451071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:15.881 [2024-12-03 10:41:46.451078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:15.881 [2024-12-03 10:41:46.451087] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:15.881 [2024-12-03 10:41:46.451094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:15.881 [2024-12-03 10:41:46.451103] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:15.881 [2024-12-03 10:41:46.451111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:15.881 [2024-12-03 10:41:46.451122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:15.881 [2024-12-03 10:41:46.451129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:15.881 [2024-12-03 10:41:46.451137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:15.881 [2024-12-03 10:41:46.451143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:15.881 [2024-12-03 10:41:46.451151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:15.881 [2024-12-03 10:41:46.451157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:15.881 
[2024-12-03 10:41:46.451165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:15.881 [2024-12-03 10:41:46.451171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:15.881 [2024-12-03 10:41:46.451179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:15.881 [2024-12-03 10:41:46.451186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:15.881 [2024-12-03 10:41:46.451195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:15.881 [2024-12-03 10:41:46.451202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:15.881 [2024-12-03 10:41:46.451212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:15.881 [2024-12-03 10:41:46.451219] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:15.881 [2024-12-03 10:41:46.451227] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:15.881 [2024-12-03 10:41:46.451237] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:15.881 [2024-12-03 10:41:46.451245] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:15.881 [2024-12-03 10:41:46.451251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:15.881 [2024-12-03 10:41:46.451259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:15.881 [2024-12-03 10:41:46.451266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.881 [2024-12-03 10:41:46.451274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:15.881 [2024-12-03 10:41:46.451283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:14:15.881 [2024-12-03 10:41:46.451291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.881 [2024-12-03 10:41:46.465239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.881 [2024-12-03 10:41:46.465365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:15.881 [2024-12-03 10:41:46.465381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.887 ms 00:14:15.881 [2024-12-03 10:41:46.465391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:15.881 [2024-12-03 10:41:46.465471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:15.881 [2024-12-03 10:41:46.465487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:15.881 [2024-12-03 10:41:46.465496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:14:15.881 [2024-12-03 10:41:46.465505] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.493735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:16.141 [2024-12-03 10:41:46.493765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:16.141 [2024-12-03 10:41:46.493774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.187 ms 00:14:16.141 [2024-12-03 10:41:46.493784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.493817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:16.141 [2024-12-03 10:41:46.493827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:16.141 [2024-12-03 10:41:46.493835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:16.141 [2024-12-03 10:41:46.493843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.494275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:16.141 [2024-12-03 10:41:46.494301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:16.141 [2024-12-03 10:41:46.494310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:14:16.141 [2024-12-03 10:41:46.494318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.494423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:16.141 [2024-12-03 10:41:46.494438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:16.141 [2024-12-03 10:41:46.494446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:14:16.141 [2024-12-03 10:41:46.494453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.521263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:16.141 [2024-12-03 10:41:46.521313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:16.141 [2024-12-03 10:41:46.521327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.782 ms 00:14:16.141 [2024-12-03 10:41:46.521339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.533726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:16.141 [2024-12-03 10:41:46.549344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:16.141 [2024-12-03 10:41:46.549495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:16.141 [2024-12-03 10:41:46.549513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.891 ms 00:14:16.141 [2024-12-03 10:41:46.549521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.603293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:16.141 [2024-12-03 10:41:46.603325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:16.141 [2024-12-03 10:41:46.603337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.736 ms 00:14:16.141 [2024-12-03 10:41:46.603345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:16.141 [2024-12-03 10:41:46.603386] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
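
For reference, the stack whose first startup is traced here was assembled with a handful of rpc.py calls, all of which appear verbatim earlier in this log. A condensed sketch, assuming a running SPDK target with the base controller already attached as nvme0n1; the lvstore and lvol UUIDs will differ from run to run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Thin-provisioned 103424 MiB logical volume used as the FTL base device
    lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
    base=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # Second controller becomes the write-buffer cache, split down to 5171 MiB
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev on top, with a 60 MiB DRAM budget for the L2P table; the long
    # RPC timeout covers the NV cache scrub that first startup performs
    # (the step noted just above)
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60
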
00:14:16.141 [2024-12-03 10:41:46.603398] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:19.441 [2024-12-03 10:41:49.670833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.441 [2024-12-03 10:41:49.670899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:19.441 [2024-12-03 10:41:49.670919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3067.438 ms 00:14:19.441 [2024-12-03 10:41:49.670930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.441 [2024-12-03 10:41:49.671164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.441 [2024-12-03 10:41:49.671177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:19.441 [2024-12-03 10:41:49.671190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:14:19.441 [2024-12-03 10:41:49.671199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.441 [2024-12-03 10:41:49.694473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.441 [2024-12-03 10:41:49.694507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:19.441 [2024-12-03 10:41:49.694522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.221 ms 00:14:19.441 [2024-12-03 10:41:49.694532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.441 [2024-12-03 10:41:49.716948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.441 [2024-12-03 10:41:49.717130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:19.442 [2024-12-03 10:41:49.717156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.375 ms 00:14:19.442 [2024-12-03 10:41:49.717164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.717553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.442 [2024-12-03 10:41:49.717573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:19.442 [2024-12-03 10:41:49.717586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:14:19.442 [2024-12-03 10:41:49.717594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.784459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.442 [2024-12-03 10:41:49.784576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:19.442 [2024-12-03 10:41:49.784644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.818 ms 00:14:19.442 [2024-12-03 10:41:49.784674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.810433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.442 [2024-12-03 10:41:49.810546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:19.442 [2024-12-03 10:41:49.810617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.706 ms 00:14:19.442 [2024-12-03 10:41:49.810648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.814961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.442 [2024-12-03 10:41:49.815072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
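
The paired name/duration trace_step records above make it easy to see where FTL startup time goes. A sketch of one way to tally them, assuming a capture of this console output with one record per line (ftl-startup.log is a placeholder filename); on this run the 4 GiB NV cache scrub above dominates at 3067.438 ms:

    awk '/trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                   printf "%10.3f ms  %s\n", $0, step }' \
        ftl-startup.log | sort -rn | head
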
00:14:19.442 [2024-12-03 10:41:49.815139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:14:19.442 [2024-12-03 10:41:49.815169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.839008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.442 [2024-12-03 10:41:49.839136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:19.442 [2024-12-03 10:41:49.839202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.777 ms 00:14:19.442 [2024-12-03 10:41:49.839231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.839300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.442 [2024-12-03 10:41:49.839434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:19.442 [2024-12-03 10:41:49.839467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:14:19.442 [2024-12-03 10:41:49.839493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.839604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:19.442 [2024-12-03 10:41:49.839668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:19.442 [2024-12-03 10:41:49.839713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:14:19.442 [2024-12-03 10:41:49.839740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:19.442 [2024-12-03 10:41:49.840815] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3402.874 ms, result 0 00:14:19.442 { 00:14:19.442 "name": "ftl0", 00:14:19.442 "uuid": "7f307c75-36be-4cb6-88c7-03c27b15ac79" 00:14:19.442 } 00:14:19.442 10:41:49 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:19.442 10:41:49 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:19.442 10:41:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:19.442 10:41:49 -- common/autotest_common.sh@899 -- # local i 00:14:19.442 10:41:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:19.442 10:41:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:19.442 10:41:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:19.700 10:41:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:19.700 [ 00:14:19.700 { 00:14:19.700 "name": "ftl0", 00:14:19.700 "aliases": [ 00:14:19.700 "7f307c75-36be-4cb6-88c7-03c27b15ac79" 00:14:19.700 ], 00:14:19.700 "product_name": "FTL disk", 00:14:19.700 "block_size": 4096, 00:14:19.700 "num_blocks": 20971520, 00:14:19.700 "uuid": "7f307c75-36be-4cb6-88c7-03c27b15ac79", 00:14:19.700 "assigned_rate_limits": { 00:14:19.700 "rw_ios_per_sec": 0, 00:14:19.700 "rw_mbytes_per_sec": 0, 00:14:19.700 "r_mbytes_per_sec": 0, 00:14:19.700 "w_mbytes_per_sec": 0 00:14:19.700 }, 00:14:19.700 "claimed": false, 00:14:19.700 "zoned": false, 00:14:19.700 "supported_io_types": { 00:14:19.700 "read": true, 00:14:19.700 "write": true, 00:14:19.700 "unmap": true, 00:14:19.700 "write_zeroes": true, 00:14:19.700 "flush": true, 00:14:19.700 "reset": false, 00:14:19.700 "compare": false, 00:14:19.700 "compare_and_write": false, 00:14:19.700 "abort": false, 00:14:19.700 "nvme_admin": false, 00:14:19.700 "nvme_io": false 00:14:19.700 }, 
00:14:19.700 "driver_specific": { 00:14:19.700 "ftl": { 00:14:19.700 "base_bdev": "7a01d776-db5e-4d4d-ae64-19435eb555b1", 00:14:19.700 "cache": "nvc0n1p0" 00:14:19.700 } 00:14:19.700 } 00:14:19.700 } 00:14:19.700 ] 00:14:19.700 10:41:50 -- common/autotest_common.sh@905 -- # return 0 00:14:19.700 10:41:50 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:19.700 10:41:50 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:19.958 10:41:50 -- ftl/fio.sh@70 -- # echo ']}' 00:14:19.958 10:41:50 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:20.217 [2024-12-03 10:41:50.596956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.217 [2024-12-03 10:41:50.597094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:20.217 [2024-12-03 10:41:50.597145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:20.217 [2024-12-03 10:41:50.597166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.217 [2024-12-03 10:41:50.597204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:20.217 [2024-12-03 10:41:50.599408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.217 [2024-12-03 10:41:50.599492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:20.217 [2024-12-03 10:41:50.599563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:14:20.217 [2024-12-03 10:41:50.599584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.217 [2024-12-03 10:41:50.599998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.217 [2024-12-03 10:41:50.600069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:20.217 [2024-12-03 10:41:50.600126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:14:20.217 [2024-12-03 10:41:50.600150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.217 [2024-12-03 10:41:50.602596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.217 [2024-12-03 10:41:50.602655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:20.217 [2024-12-03 10:41:50.602669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:14:20.217 [2024-12-03 10:41:50.602676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.217 [2024-12-03 10:41:50.607361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.217 [2024-12-03 10:41:50.607383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:20.217 [2024-12-03 10:41:50.607393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:14:20.217 [2024-12-03 10:41:50.607401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.217 [2024-12-03 10:41:50.625345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.217 [2024-12-03 10:41:50.625437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:20.217 [2024-12-03 10:41:50.625453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.870 ms 00:14:20.217 [2024-12-03 10:41:50.625459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.218 [2024-12-03 10:41:50.638249] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.218 [2024-12-03 10:41:50.638274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:20.218 [2024-12-03 10:41:50.638297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.755 ms 00:14:20.218 [2024-12-03 10:41:50.638304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.218 [2024-12-03 10:41:50.638451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.218 [2024-12-03 10:41:50.638464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:20.218 [2024-12-03 10:41:50.638476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:14:20.218 [2024-12-03 10:41:50.638483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.218 [2024-12-03 10:41:50.656692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.218 [2024-12-03 10:41:50.656786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:20.218 [2024-12-03 10:41:50.656801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.184 ms 00:14:20.218 [2024-12-03 10:41:50.656807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.218 [2024-12-03 10:41:50.674508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.218 [2024-12-03 10:41:50.674531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:20.218 [2024-12-03 10:41:50.674542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.669 ms 00:14:20.218 [2024-12-03 10:41:50.674548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.218 [2024-12-03 10:41:50.691638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.218 [2024-12-03 10:41:50.691733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:20.218 [2024-12-03 10:41:50.691749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.054 ms 00:14:20.218 [2024-12-03 10:41:50.691755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.218 [2024-12-03 10:41:50.709132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.218 [2024-12-03 10:41:50.709156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:20.218 [2024-12-03 10:41:50.709166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.307 ms 00:14:20.218 [2024-12-03 10:41:50.709173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.218 [2024-12-03 10:41:50.709207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:20.218 [2024-12-03 10:41:50.709221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709262] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 
10:41:50.709452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:20.218 [2024-12-03 10:41:50.709645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:20.218 [2024-12-03 10:41:50.709738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.709988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:20.219 [2024-12-03 10:41:50.710000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:20.219 [2024-12-03 10:41:50.710009] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f307c75-36be-4cb6-88c7-03c27b15ac79 00:14:20.219 [2024-12-03 10:41:50.710016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:20.219 [2024-12-03 10:41:50.710024] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:20.219 [2024-12-03 10:41:50.710030] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:20.219 [2024-12-03 10:41:50.710038] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:20.219 [2024-12-03 10:41:50.710045] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:20.219 [2024-12-03 10:41:50.710063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:20.219 [2024-12-03 10:41:50.710070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:20.219 [2024-12-03 10:41:50.710078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:20.219 [2024-12-03 10:41:50.710083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:20.219 [2024-12-03 10:41:50.710093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.219 [2024-12-03 10:41:50.710104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:20.219 [2024-12-03 10:41:50.710112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:14:20.219 [2024-12-03 10:41:50.710120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.219 [2024-12-03 10:41:50.720083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.219 [2024-12-03 10:41:50.720108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:20.219 [2024-12-03 10:41:50.720118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.930 ms 00:14:20.219 [2024-12-03 10:41:50.720125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.219 [2024-12-03 10:41:50.720288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:20.219 [2024-12-03 10:41:50.720301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:20.219 [2024-12-03 10:41:50.720310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:14:20.219 [2024-12-03 10:41:50.720317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.219 [2024-12-03 10:41:50.757093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.219 [2024-12-03 10:41:50.757121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:20.219 [2024-12-03 10:41:50.757133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.219 [2024-12-03 10:41:50.757141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.219 [2024-12-03 10:41:50.757190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.219 [2024-12-03 10:41:50.757199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:20.219 [2024-12-03 10:41:50.757207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.219 [2024-12-03 10:41:50.757214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.219 [2024-12-03 10:41:50.757276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.219 [2024-12-03 10:41:50.757287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:20.219 [2024-12-03 10:41:50.757297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.219 [2024-12-03 10:41:50.757303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.219 [2024-12-03 10:41:50.757324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.219 [2024-12-03 10:41:50.757333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:14:20.219 [2024-12-03 10:41:50.757342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.219 [2024-12-03 10:41:50.757349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.828149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.828183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:20.478 [2024-12-03 10:41:50.828197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.828205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.852468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.852495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:20.478 [2024-12-03 10:41:50.852506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.852514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.852573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.852582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:20.478 [2024-12-03 10:41:50.852592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.852599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.852648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.852657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:20.478 [2024-12-03 10:41:50.852668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.852675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.852759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.852771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:20.478 [2024-12-03 10:41:50.852780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.852787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.852829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.852837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:20.478 [2024-12-03 10:41:50.852846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.852855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.852894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.852903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:20.478 [2024-12-03 10:41:50.852912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.852919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.852967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:20.478 [2024-12-03 10:41:50.852975] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:20.478 [2024-12-03 10:41:50.852986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:20.478 [2024-12-03 10:41:50.852993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:20.478 [2024-12-03 10:41:50.853146] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 256.152 ms, result 0 00:14:20.478 true 00:14:20.478 10:41:50 -- ftl/fio.sh@75 -- # killprocess 70716 00:14:20.478 10:41:50 -- common/autotest_common.sh@936 -- # '[' -z 70716 ']' 00:14:20.478 10:41:50 -- common/autotest_common.sh@940 -- # kill -0 70716 00:14:20.478 10:41:50 -- common/autotest_common.sh@941 -- # uname 00:14:20.478 10:41:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.478 10:41:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70716 00:14:20.478 10:41:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:20.478 killing process with pid 70716 00:14:20.478 10:41:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:20.478 10:41:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70716' 00:14:20.478 10:41:50 -- common/autotest_common.sh@955 -- # kill 70716 00:14:20.479 10:41:50 -- common/autotest_common.sh@960 -- # wait 70716 00:14:25.765 10:41:56 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:25.765 10:41:56 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:25.765 10:41:56 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:25.765 10:41:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.765 10:41:56 -- common/autotest_common.sh@10 -- # set +x 00:14:25.765 10:41:56 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:25.765 10:41:56 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:25.765 10:41:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:25.765 10:41:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.765 10:41:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:25.765 10:41:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.765 10:41:56 -- common/autotest_common.sh@1330 -- # shift 00:14:25.765 10:41:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:25.765 10:41:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.765 10:41:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.765 10:41:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:25.765 10:41:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:25.765 10:41:56 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:25.765 10:41:56 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:25.765 10:41:56 -- common/autotest_common.sh@1336 -- # break 00:14:25.765 10:41:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:25.765 10:41:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:26.024 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:26.024 fio-3.35 00:14:26.024 Starting 1 thread 00:14:31.392 00:14:31.392 test: (groupid=0, jobs=1): err= 0: pid=70926: Tue Dec 3 10:42:01 2024 00:14:31.392 read: IOPS=948, BW=63.0MiB/s (66.1MB/s)(255MiB/4039msec) 00:14:31.392 slat (nsec): min=4140, max=78491, avg=6728.12, stdev=3318.36 00:14:31.392 clat (usec): min=257, max=1460, avg=472.70, stdev=202.89 00:14:31.392 lat (usec): min=261, max=1480, avg=479.43, stdev=204.77 00:14:31.392 clat percentiles (usec): 00:14:31.392 | 1.00th=[ 281], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 318], 00:14:31.392 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 383], 60.00th=[ 441], 00:14:31.392 | 70.00th=[ 519], 80.00th=[ 611], 90.00th=[ 848], 95.00th=[ 914], 00:14:31.392 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1188], 99.95th=[ 1352], 00:14:31.392 | 99.99th=[ 1467] 00:14:31.392 write: IOPS=955, BW=63.4MiB/s (66.5MB/s)(256MiB/4036msec); 0 zone resets 00:14:31.392 slat (nsec): min=14579, max=72124, avg=20557.09, stdev=5466.74 00:14:31.392 clat (usec): min=292, max=4601, avg=537.03, stdev=265.38 00:14:31.392 lat (usec): min=311, max=4639, avg=557.58, stdev=268.38 00:14:31.392 clat percentiles (usec): 00:14:31.392 | 1.00th=[ 306], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:14:31.392 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 453], 60.00th=[ 486], 00:14:31.392 | 70.00th=[ 603], 80.00th=[ 685], 90.00th=[ 955], 95.00th=[ 1012], 00:14:31.393 | 99.00th=[ 1532], 99.50th=[ 1614], 99.90th=[ 1893], 99.95th=[ 1942], 00:14:31.393 | 99.99th=[ 4621] 00:14:31.393 bw ( KiB/s): min=35224, max=87040, per=99.83%, avg=64855.00, stdev=20775.08, samples=8 00:14:31.393 iops : min= 518, max= 1280, avg=953.75, stdev=305.52, samples=8 00:14:31.393 lat (usec) : 500=65.01%, 750=19.00%, 1000=13.01% 00:14:31.393 lat (msec) : 2=2.97%, 10=0.01% 00:14:31.393 cpu : usr=99.28%, sys=0.10%, ctx=29, majf=0, minf=1318 00:14:31.393 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.393 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.393 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:31.393 00:14:31.393 Run status group 0 (all jobs): 00:14:31.393 READ: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=255MiB (267MB), run=4039-4039msec 00:14:31.393 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=256MiB (269MB), run=4036-4036msec 00:14:32.332 ----------------------------------------------------- 00:14:32.332 Suppressions used: 00:14:32.332 count bytes template 00:14:32.332 1 5 /usr/src/fio/parse.c 00:14:32.332 1 8 libtcmalloc_minimal.so 00:14:32.332 1 904 libcrypto.so 00:14:32.332 ----------------------------------------------------- 00:14:32.332 00:14:32.593 10:42:02 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:32.593 10:42:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.593 10:42:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.593 10:42:03 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:32.593 10:42:03 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:32.593 10:42:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.593 10:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:32.593 10:42:03 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:32.593 10:42:03 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:32.593 10:42:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:32.593 10:42:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:32.593 10:42:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:32.593 10:42:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.593 10:42:03 -- common/autotest_common.sh@1330 -- # shift 00:14:32.593 10:42:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:32.593 10:42:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:32.593 10:42:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:32.593 10:42:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.593 10:42:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:32.593 10:42:03 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:32.593 10:42:03 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:32.593 10:42:03 -- common/autotest_common.sh@1336 -- # break 00:14:32.593 10:42:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:32.593 10:42:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:32.852 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:32.852 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:32.852 fio-3.35 00:14:32.852 Starting 2 threads 00:14:59.402 00:14:59.402 first_half: (groupid=0, jobs=1): err= 0: pid=71031: Tue Dec 3 10:42:27 2024 00:14:59.402 read: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(255MiB/23092msec) 00:14:59.402 slat (nsec): min=3030, max=33594, avg=5459.72, stdev=1626.38 00:14:59.402 clat (usec): min=625, max=434644, avg=34796.20, stdev=20792.01 00:14:59.402 lat (usec): min=631, max=434655, avg=34801.66, stdev=20792.11 00:14:59.402 clat percentiles (msec): 00:14:59.402 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:14:59.402 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:14:59.402 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 51], 00:14:59.402 | 99.00th=[ 142], 99.50th=[ 174], 99.90th=[ 239], 99.95th=[ 347], 00:14:59.402 | 99.99th=[ 422] 00:14:59.402 write: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(256MiB/17918msec); 0 zone resets 00:14:59.402 slat (usec): min=3, max=3352, avg= 7.39, stdev=26.70 00:14:59.402 clat (usec): min=356, max=106372, avg=10459.91, stdev=16590.75 00:14:59.402 lat (usec): min=366, max=106381, avg=10467.31, stdev=16591.15 00:14:59.402 clat percentiles (usec): 00:14:59.402 | 1.00th=[ 685], 5.00th=[ 791], 10.00th=[ 947], 20.00th=[ 1237], 00:14:59.402 | 30.00th=[ 2147], 40.00th=[ 3359], 50.00th=[ 4555], 60.00th=[ 5276], 00:14:59.402 | 70.00th=[ 7963], 80.00th=[ 15533], 90.00th=[ 22414], 95.00th=[ 58983], 00:14:59.402 | 99.00th=[ 80217], 99.50th=[ 83362], 99.90th=[ 86508], 99.95th=[ 99091], 00:14:59.402 | 99.99th=[104334] 00:14:59.402 bw ( KiB/s): min= 840, max=48728, 
per=96.86%, avg=23837.27, stdev=14018.48, samples=22 00:14:59.402 iops : min= 210, max=12182, avg=5959.32, stdev=3504.62, samples=22 00:14:59.402 lat (usec) : 500=0.02%, 750=1.69%, 1000=4.22% 00:14:59.402 lat (msec) : 2=8.96%, 4=8.08%, 10=13.54%, 20=8.65%, 50=49.21% 00:14:59.402 lat (msec) : 100=4.48%, 250=1.11%, 500=0.04% 00:14:59.402 cpu : usr=99.28%, sys=0.27%, ctx=92, majf=0, minf=5597 00:14:59.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:59.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.402 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:59.402 issued rwts: total=65199,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:59.402 second_half: (groupid=0, jobs=1): err= 0: pid=71032: Tue Dec 3 10:42:27 2024 00:14:59.402 read: IOPS=2804, BW=11.0MiB/s (11.5MB/s)(255MiB/23293msec) 00:14:59.402 slat (nsec): min=2945, max=77778, avg=4894.06, stdev=1605.31 00:14:59.402 clat (usec): min=635, max=455230, avg=34196.88, stdev=24264.03 00:14:59.402 lat (usec): min=639, max=455235, avg=34201.77, stdev=24264.24 00:14:59.402 clat percentiles (msec): 00:14:59.402 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 29], 00:14:59.402 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:14:59.402 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 47], 00:14:59.402 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 401], 99.95th=[ 439], 00:14:59.402 | 99.99th=[ 447] 00:14:59.402 write: IOPS=3076, BW=12.0MiB/s (12.6MB/s)(256MiB/21305msec); 0 zone resets 00:14:59.402 slat (usec): min=3, max=121, avg= 6.28, stdev= 2.95 00:14:59.402 clat (usec): min=365, max=107166, avg=11391.23, stdev=17221.05 00:14:59.402 lat (usec): min=374, max=107176, avg=11397.50, stdev=17221.26 00:14:59.402 clat percentiles (usec): 00:14:59.402 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 898], 20.00th=[ 1188], 00:14:59.402 | 30.00th=[ 2008], 40.00th=[ 3490], 50.00th=[ 5080], 60.00th=[ 6652], 00:14:59.402 | 70.00th=[ 10159], 80.00th=[ 16319], 90.00th=[ 27919], 95.00th=[ 60031], 00:14:59.402 | 99.00th=[ 81265], 99.50th=[ 84411], 99.90th=[ 96994], 99.95th=[103285], 00:14:59.402 | 99.99th=[106431] 00:14:59.402 bw ( KiB/s): min= 561, max=54000, per=76.12%, avg=18731.50, stdev=13328.40, samples=28 00:14:59.402 iops : min= 140, max=13500, avg=4682.79, stdev=3332.12, samples=28 00:14:59.402 lat (usec) : 500=0.02%, 750=2.34%, 1000=4.50% 00:14:59.402 lat (msec) : 2=8.26%, 4=6.82%, 10=14.70%, 20=8.66%, 50=49.21% 00:14:59.402 lat (msec) : 100=4.39%, 250=0.96%, 500=0.15% 00:14:59.402 cpu : usr=99.46%, sys=0.11%, ctx=40, majf=0, minf=5512 00:14:59.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:59.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.402 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:59.402 issued rwts: total=65321,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:59.402 00:14:59.402 Run status group 0 (all jobs): 00:14:59.402 READ: bw=21.9MiB/s (23.0MB/s), 11.0MiB/s-11.0MiB/s (11.5MB/s-11.6MB/s), io=510MiB (535MB), run=23092-23293msec 00:14:59.402 WRITE: bw=24.0MiB/s (25.2MB/s), 12.0MiB/s-14.3MiB/s (12.6MB/s-15.0MB/s), io=512MiB (537MB), run=17918-21305msec 00:14:59.402 ----------------------------------------------------- 00:14:59.402 Suppressions used: 00:14:59.402 count bytes template 
00:14:59.402 2 10 /usr/src/fio/parse.c 00:14:59.402 2 192 /usr/src/fio/iolog.c 00:14:59.402 1 8 libtcmalloc_minimal.so 00:14:59.402 1 904 libcrypto.so 00:14:59.402 ----------------------------------------------------- 00:14:59.402 00:14:59.402 10:42:29 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:14:59.402 10:42:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.402 10:42:29 -- common/autotest_common.sh@10 -- # set +x 00:14:59.402 10:42:29 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:59.402 10:42:29 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:14:59.402 10:42:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.402 10:42:29 -- common/autotest_common.sh@10 -- # set +x 00:14:59.402 10:42:29 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:59.402 10:42:29 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:59.402 10:42:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:59.402 10:42:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.403 10:42:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:59.403 10:42:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.403 10:42:29 -- common/autotest_common.sh@1330 -- # shift 00:14:59.403 10:42:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:59.403 10:42:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.403 10:42:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.403 10:42:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:59.403 10:42:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:59.403 10:42:29 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.403 10:42:29 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.403 10:42:29 -- common/autotest_common.sh@1336 -- # break 00:14:59.403 10:42:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.403 10:42:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:59.661 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:59.661 fio-3.35 00:14:59.661 Starting 1 thread 00:15:14.575 00:15:14.575 test: (groupid=0, jobs=1): err= 0: pid=71344: Tue Dec 3 10:42:45 2024 00:15:14.575 read: IOPS=7822, BW=30.6MiB/s (32.0MB/s)(255MiB/8335msec) 00:15:14.575 slat (nsec): min=2984, max=61815, avg=5271.31, stdev=1955.36 00:15:14.575 clat (usec): min=521, max=29987, avg=16352.74, stdev=2236.57 00:15:14.575 lat (usec): min=525, max=29991, avg=16358.01, stdev=2237.12 00:15:14.575 clat percentiles (usec): 00:15:14.575 | 1.00th=[13960], 5.00th=[14484], 10.00th=[14615], 20.00th=[15008], 00:15:14.575 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:15:14.575 | 70.00th=[16450], 80.00th=[17695], 90.00th=[19268], 95.00th=[21103], 00:15:14.575 | 99.00th=[24511], 99.50th=[25560], 99.90th=[27132], 99.95th=[27657], 00:15:14.575 | 99.99th=[29230] 00:15:14.575 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(256MiB/5699msec); 0 zone 
resets 00:15:14.575 slat (usec): min=4, max=131, avg= 7.65, stdev= 3.65 00:15:14.575 clat (usec): min=481, max=59468, avg=11085.60, stdev=11795.17 00:15:14.575 lat (usec): min=486, max=59475, avg=11093.25, stdev=11795.23 00:15:14.575 clat percentiles (usec): 00:15:14.575 | 1.00th=[ 742], 5.00th=[ 963], 10.00th=[ 1074], 20.00th=[ 1205], 00:15:14.575 | 30.00th=[ 1352], 40.00th=[ 1713], 50.00th=[ 8848], 60.00th=[10945], 00:15:14.575 | 70.00th=[13173], 80.00th=[15664], 90.00th=[34866], 95.00th=[36439], 00:15:14.575 | 99.00th=[39060], 99.50th=[41681], 99.90th=[45351], 99.95th=[50070], 00:15:14.575 | 99.99th=[57934] 00:15:14.575 bw ( KiB/s): min=16864, max=53672, per=94.98%, avg=43690.67, stdev=10038.02, samples=12 00:15:14.575 iops : min= 4216, max=13418, avg=10922.67, stdev=2509.50, samples=12 00:15:14.575 lat (usec) : 500=0.01%, 750=0.54%, 1000=2.65% 00:15:14.575 lat (msec) : 2=17.29%, 4=0.61%, 10=6.75%, 20=60.19%, 50=11.94% 00:15:14.575 lat (msec) : 100=0.03% 00:15:14.575 cpu : usr=99.16%, sys=0.18%, ctx=32, majf=0, minf=5567 00:15:14.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:14.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.575 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.575 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.575 00:15:14.575 Run status group 0 (all jobs): 00:15:14.576 READ: bw=30.6MiB/s (32.0MB/s), 30.6MiB/s-30.6MiB/s (32.0MB/s-32.0MB/s), io=255MiB (267MB), run=8335-8335msec 00:15:14.576 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=256MiB (268MB), run=5699-5699msec 00:15:16.481 ----------------------------------------------------- 00:15:16.481 Suppressions used: 00:15:16.481 count bytes template 00:15:16.481 1 5 /usr/src/fio/parse.c 00:15:16.481 2 192 /usr/src/fio/iolog.c 00:15:16.481 1 8 libtcmalloc_minimal.so 00:15:16.481 1 904 libcrypto.so 00:15:16.481 ----------------------------------------------------- 00:15:16.481 00:15:16.481 10:42:46 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:16.481 10:42:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.481 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.481 10:42:46 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:16.481 10:42:46 -- ftl/fio.sh@85 -- # remove_shm 00:15:16.481 Remove shared memory files 00:15:16.481 10:42:46 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:16.481 10:42:46 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:16.481 10:42:46 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:16.481 10:42:46 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56184 /dev/shm/spdk_tgt_trace.pid69623 00:15:16.481 10:42:46 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:16.481 10:42:46 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:16.481 00:15:16.481 real 1m4.377s 00:15:16.481 user 2m18.289s 00:15:16.481 sys 0m2.952s 00:15:16.481 10:42:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:16.481 ************************************ 00:15:16.481 END TEST ftl_fio_basic 00:15:16.481 ************************************ 00:15:16.481 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.481 10:42:46 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:16.481 10:42:46 -- 
common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:16.481 10:42:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.481 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.481 ************************************ 00:15:16.481 START TEST ftl_bdevperf 00:15:16.481 ************************************ 00:15:16.481 10:42:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:16.481 * Looking for test storage... 00:15:16.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:16.481 10:42:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:16.481 10:42:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:16.481 10:42:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:16.481 10:42:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:16.481 10:42:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:16.481 10:42:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:16.481 10:42:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:16.481 10:42:46 -- scripts/common.sh@335 -- # IFS=.-: 00:15:16.481 10:42:46 -- scripts/common.sh@335 -- # read -ra ver1 00:15:16.481 10:42:46 -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.481 10:42:46 -- scripts/common.sh@336 -- # read -ra ver2 00:15:16.481 10:42:46 -- scripts/common.sh@337 -- # local 'op=<' 00:15:16.481 10:42:46 -- scripts/common.sh@339 -- # ver1_l=2 00:15:16.481 10:42:46 -- scripts/common.sh@340 -- # ver2_l=1 00:15:16.481 10:42:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:16.481 10:42:46 -- scripts/common.sh@343 -- # case "$op" in 00:15:16.481 10:42:46 -- scripts/common.sh@344 -- # : 1 00:15:16.481 10:42:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:16.481 10:42:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.481 10:42:46 -- scripts/common.sh@364 -- # decimal 1 00:15:16.481 10:42:46 -- scripts/common.sh@352 -- # local d=1 00:15:16.481 10:42:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.481 10:42:46 -- scripts/common.sh@354 -- # echo 1 00:15:16.481 10:42:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:16.481 10:42:46 -- scripts/common.sh@365 -- # decimal 2 00:15:16.481 10:42:46 -- scripts/common.sh@352 -- # local d=2 00:15:16.481 10:42:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.481 10:42:46 -- scripts/common.sh@354 -- # echo 2 00:15:16.481 10:42:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:16.481 10:42:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:16.481 10:42:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:16.481 10:42:46 -- scripts/common.sh@367 -- # return 0 00:15:16.481 10:42:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.481 10:42:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.481 --rc genhtml_branch_coverage=1 00:15:16.481 --rc genhtml_function_coverage=1 00:15:16.481 --rc genhtml_legend=1 00:15:16.481 --rc geninfo_all_blocks=1 00:15:16.481 --rc geninfo_unexecuted_blocks=1 00:15:16.481 00:15:16.481 ' 00:15:16.481 10:42:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.482 --rc genhtml_branch_coverage=1 00:15:16.482 --rc genhtml_function_coverage=1 00:15:16.482 --rc genhtml_legend=1 00:15:16.482 --rc geninfo_all_blocks=1 00:15:16.482 --rc geninfo_unexecuted_blocks=1 00:15:16.482 00:15:16.482 ' 00:15:16.482 10:42:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:16.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.482 --rc genhtml_branch_coverage=1 00:15:16.482 --rc genhtml_function_coverage=1 00:15:16.482 --rc genhtml_legend=1 00:15:16.482 --rc geninfo_all_blocks=1 00:15:16.482 --rc geninfo_unexecuted_blocks=1 00:15:16.482 00:15:16.482 ' 00:15:16.482 10:42:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:16.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.482 --rc genhtml_branch_coverage=1 00:15:16.482 --rc genhtml_function_coverage=1 00:15:16.482 --rc genhtml_legend=1 00:15:16.482 --rc geninfo_all_blocks=1 00:15:16.482 --rc geninfo_unexecuted_blocks=1 00:15:16.482 00:15:16.482 ' 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:16.482 10:42:46 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:16.482 10:42:46 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:16.482 10:42:46 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:16.482 10:42:46 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:16.482 10:42:46 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:16.482 10:42:46 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.482 10:42:46 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:16.482 10:42:46 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:16.482 10:42:46 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.482 10:42:46 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.482 10:42:46 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:16.482 10:42:46 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:16.482 10:42:46 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:16.482 10:42:46 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:16.482 10:42:46 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:16.482 10:42:46 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:16.482 10:42:46 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.482 10:42:46 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.482 10:42:46 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:16.482 10:42:46 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:16.482 10:42:46 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:16.482 10:42:46 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:16.482 10:42:46 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:16.482 10:42:46 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:16.482 10:42:46 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:16.482 10:42:46 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:16.482 10:42:46 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:16.482 10:42:46 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:16.482 10:42:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.482 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71589 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:16.482 10:42:46 -- ftl/bdevperf.sh@22 -- # waitforlisten 71589 00:15:16.482 10:42:46 -- common/autotest_common.sh@829 -- # '[' -z 71589 ']' 00:15:16.482 10:42:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.482 10:42:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.482 10:42:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.482 10:42:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.482 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:15:16.482 [2024-12-03 10:42:46.916238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:16.482 [2024-12-03 10:42:46.916448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71589 ] 00:15:16.482 [2024-12-03 10:42:47.057916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.740 [2024-12-03 10:42:47.226373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.333 10:42:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.333 10:42:47 -- common/autotest_common.sh@862 -- # return 0 00:15:17.333 10:42:47 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:17.333 10:42:47 -- ftl/common.sh@54 -- # local name=nvme0 00:15:17.333 10:42:47 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:17.333 10:42:47 -- ftl/common.sh@56 -- # local size=103424 00:15:17.333 10:42:47 -- ftl/common.sh@59 -- # local base_bdev 00:15:17.333 10:42:47 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:17.591 10:42:47 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:17.591 10:42:47 -- ftl/common.sh@62 -- # local base_size 00:15:17.591 10:42:47 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:17.591 10:42:47 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:17.591 10:42:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:17.591 10:42:47 -- common/autotest_common.sh@1369 -- # local bs 00:15:17.591 10:42:47 -- common/autotest_common.sh@1370 -- # local nb 00:15:17.591 10:42:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:17.591 10:42:48 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:17.591 { 00:15:17.591 "name": "nvme0n1", 00:15:17.591 "aliases": [ 00:15:17.591 "d93b350d-e8b4-4a71-8b85-4ba6cf79d140" 00:15:17.591 ], 00:15:17.591 "product_name": "NVMe disk", 00:15:17.591 "block_size": 4096, 00:15:17.591 "num_blocks": 1310720, 00:15:17.591 "uuid": "d93b350d-e8b4-4a71-8b85-4ba6cf79d140", 00:15:17.591 "assigned_rate_limits": { 00:15:17.591 "rw_ios_per_sec": 0, 00:15:17.591 "rw_mbytes_per_sec": 0, 00:15:17.591 "r_mbytes_per_sec": 0, 00:15:17.591 "w_mbytes_per_sec": 0 00:15:17.591 }, 00:15:17.591 "claimed": true, 00:15:17.591 "claim_type": "read_many_write_one", 00:15:17.591 "zoned": false, 00:15:17.591 "supported_io_types": { 00:15:17.591 "read": true, 00:15:17.591 "write": true, 00:15:17.591 "unmap": true, 00:15:17.591 "write_zeroes": true, 00:15:17.591 "flush": true, 00:15:17.591 "reset": true, 00:15:17.591 "compare": true, 00:15:17.591 "compare_and_write": false, 00:15:17.591 "abort": true, 00:15:17.591 "nvme_admin": true, 00:15:17.591 "nvme_io": true 00:15:17.591 }, 00:15:17.591 "driver_specific": { 00:15:17.591 "nvme": [ 00:15:17.591 { 00:15:17.591 "pci_address": "0000:00:07.0", 00:15:17.591 "trid": { 00:15:17.591 "trtype": "PCIe", 00:15:17.591 "traddr": "0000:00:07.0" 00:15:17.591 }, 00:15:17.591 "ctrlr_data": { 00:15:17.591 "cntlid": 0, 
00:15:17.591 "vendor_id": "0x1b36", 00:15:17.591 "model_number": "QEMU NVMe Ctrl", 00:15:17.591 "serial_number": "12341", 00:15:17.591 "firmware_revision": "8.0.0", 00:15:17.591 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:17.591 "oacs": { 00:15:17.591 "security": 0, 00:15:17.591 "format": 1, 00:15:17.591 "firmware": 0, 00:15:17.591 "ns_manage": 1 00:15:17.591 }, 00:15:17.591 "multi_ctrlr": false, 00:15:17.591 "ana_reporting": false 00:15:17.591 }, 00:15:17.591 "vs": { 00:15:17.591 "nvme_version": "1.4" 00:15:17.591 }, 00:15:17.591 "ns_data": { 00:15:17.591 "id": 1, 00:15:17.591 "can_share": false 00:15:17.591 } 00:15:17.591 } 00:15:17.591 ], 00:15:17.591 "mp_policy": "active_passive" 00:15:17.591 } 00:15:17.591 } 00:15:17.591 ]' 00:15:17.591 10:42:48 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:17.591 10:42:48 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:17.591 10:42:48 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:17.849 10:42:48 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:17.849 10:42:48 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:17.849 10:42:48 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:17.849 10:42:48 -- ftl/common.sh@63 -- # base_size=5120 00:15:17.849 10:42:48 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:17.849 10:42:48 -- ftl/common.sh@67 -- # clear_lvols 00:15:17.849 10:42:48 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:17.849 10:42:48 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:17.849 10:42:48 -- ftl/common.sh@28 -- # stores=3a0537fe-1267-4190-9664-774c96a8affb 00:15:17.849 10:42:48 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:17.849 10:42:48 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a0537fe-1267-4190-9664-774c96a8affb 00:15:18.106 10:42:48 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:18.364 10:42:48 -- ftl/common.sh@68 -- # lvs=412df3f0-48f3-49b5-a204-e35644742869 00:15:18.364 10:42:48 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 412df3f0-48f3-49b5-a204-e35644742869 00:15:18.364 10:42:48 -- ftl/bdevperf.sh@23 -- # split_bdev=fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.364 10:42:48 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.364 10:42:48 -- ftl/common.sh@35 -- # local name=nvc0 00:15:18.364 10:42:48 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:18.364 10:42:48 -- ftl/common.sh@37 -- # local base_bdev=fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.364 10:42:48 -- ftl/common.sh@38 -- # local cache_size= 00:15:18.622 10:42:48 -- ftl/common.sh@41 -- # get_bdev_size fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.622 10:42:48 -- common/autotest_common.sh@1367 -- # local bdev_name=fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.622 10:42:48 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:18.622 10:42:48 -- common/autotest_common.sh@1369 -- # local bs 00:15:18.622 10:42:48 -- common/autotest_common.sh@1370 -- # local nb 00:15:18.622 10:42:48 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.622 10:42:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:18.622 { 00:15:18.622 "name": "fd1bd26e-1e66-4052-ae3d-43a60f32d3a7", 00:15:18.622 "aliases": [ 
00:15:18.622 "lvs/nvme0n1p0" 00:15:18.622 ], 00:15:18.622 "product_name": "Logical Volume", 00:15:18.622 "block_size": 4096, 00:15:18.622 "num_blocks": 26476544, 00:15:18.622 "uuid": "fd1bd26e-1e66-4052-ae3d-43a60f32d3a7", 00:15:18.622 "assigned_rate_limits": { 00:15:18.622 "rw_ios_per_sec": 0, 00:15:18.622 "rw_mbytes_per_sec": 0, 00:15:18.622 "r_mbytes_per_sec": 0, 00:15:18.622 "w_mbytes_per_sec": 0 00:15:18.622 }, 00:15:18.622 "claimed": false, 00:15:18.622 "zoned": false, 00:15:18.622 "supported_io_types": { 00:15:18.622 "read": true, 00:15:18.622 "write": true, 00:15:18.622 "unmap": true, 00:15:18.622 "write_zeroes": true, 00:15:18.622 "flush": false, 00:15:18.622 "reset": true, 00:15:18.622 "compare": false, 00:15:18.622 "compare_and_write": false, 00:15:18.622 "abort": false, 00:15:18.622 "nvme_admin": false, 00:15:18.622 "nvme_io": false 00:15:18.622 }, 00:15:18.622 "driver_specific": { 00:15:18.622 "lvol": { 00:15:18.622 "lvol_store_uuid": "412df3f0-48f3-49b5-a204-e35644742869", 00:15:18.622 "base_bdev": "nvme0n1", 00:15:18.622 "thin_provision": true, 00:15:18.622 "snapshot": false, 00:15:18.622 "clone": false, 00:15:18.622 "esnap_clone": false 00:15:18.622 } 00:15:18.622 } 00:15:18.622 } 00:15:18.622 ]' 00:15:18.622 10:42:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:18.622 10:42:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:18.622 10:42:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:18.622 10:42:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:18.622 10:42:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:18.622 10:42:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:18.622 10:42:49 -- ftl/common.sh@41 -- # local base_size=5171 00:15:18.622 10:42:49 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:18.622 10:42:49 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:18.880 10:42:49 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:18.880 10:42:49 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:18.880 10:42:49 -- ftl/common.sh@48 -- # get_bdev_size fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.880 10:42:49 -- common/autotest_common.sh@1367 -- # local bdev_name=fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:18.880 10:42:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:18.880 10:42:49 -- common/autotest_common.sh@1369 -- # local bs 00:15:18.880 10:42:49 -- common/autotest_common.sh@1370 -- # local nb 00:15:18.880 10:42:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:19.138 10:42:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:19.138 { 00:15:19.138 "name": "fd1bd26e-1e66-4052-ae3d-43a60f32d3a7", 00:15:19.138 "aliases": [ 00:15:19.138 "lvs/nvme0n1p0" 00:15:19.138 ], 00:15:19.138 "product_name": "Logical Volume", 00:15:19.138 "block_size": 4096, 00:15:19.138 "num_blocks": 26476544, 00:15:19.138 "uuid": "fd1bd26e-1e66-4052-ae3d-43a60f32d3a7", 00:15:19.138 "assigned_rate_limits": { 00:15:19.138 "rw_ios_per_sec": 0, 00:15:19.138 "rw_mbytes_per_sec": 0, 00:15:19.138 "r_mbytes_per_sec": 0, 00:15:19.138 "w_mbytes_per_sec": 0 00:15:19.138 }, 00:15:19.138 "claimed": false, 00:15:19.138 "zoned": false, 00:15:19.138 "supported_io_types": { 00:15:19.138 "read": true, 00:15:19.138 "write": true, 00:15:19.138 "unmap": true, 00:15:19.138 "write_zeroes": true, 00:15:19.138 "flush": false, 00:15:19.138 "reset": true, 
00:15:19.138 "compare": false, 00:15:19.138 "compare_and_write": false, 00:15:19.138 "abort": false, 00:15:19.138 "nvme_admin": false, 00:15:19.138 "nvme_io": false 00:15:19.138 }, 00:15:19.138 "driver_specific": { 00:15:19.138 "lvol": { 00:15:19.138 "lvol_store_uuid": "412df3f0-48f3-49b5-a204-e35644742869", 00:15:19.138 "base_bdev": "nvme0n1", 00:15:19.138 "thin_provision": true, 00:15:19.138 "snapshot": false, 00:15:19.138 "clone": false, 00:15:19.138 "esnap_clone": false 00:15:19.138 } 00:15:19.138 } 00:15:19.138 } 00:15:19.138 ]' 00:15:19.138 10:42:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:19.138 10:42:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:19.138 10:42:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:19.138 10:42:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:19.138 10:42:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:19.138 10:42:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:19.138 10:42:49 -- ftl/common.sh@48 -- # cache_size=5171 00:15:19.138 10:42:49 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:19.396 10:42:49 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:19.396 10:42:49 -- ftl/bdevperf.sh@26 -- # get_bdev_size fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:19.396 10:42:49 -- common/autotest_common.sh@1367 -- # local bdev_name=fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:19.396 10:42:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:19.396 10:42:49 -- common/autotest_common.sh@1369 -- # local bs 00:15:19.396 10:42:49 -- common/autotest_common.sh@1370 -- # local nb 00:15:19.396 10:42:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 00:15:19.654 10:42:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:19.654 { 00:15:19.654 "name": "fd1bd26e-1e66-4052-ae3d-43a60f32d3a7", 00:15:19.654 "aliases": [ 00:15:19.654 "lvs/nvme0n1p0" 00:15:19.654 ], 00:15:19.654 "product_name": "Logical Volume", 00:15:19.654 "block_size": 4096, 00:15:19.654 "num_blocks": 26476544, 00:15:19.654 "uuid": "fd1bd26e-1e66-4052-ae3d-43a60f32d3a7", 00:15:19.654 "assigned_rate_limits": { 00:15:19.654 "rw_ios_per_sec": 0, 00:15:19.654 "rw_mbytes_per_sec": 0, 00:15:19.654 "r_mbytes_per_sec": 0, 00:15:19.654 "w_mbytes_per_sec": 0 00:15:19.654 }, 00:15:19.654 "claimed": false, 00:15:19.654 "zoned": false, 00:15:19.654 "supported_io_types": { 00:15:19.654 "read": true, 00:15:19.654 "write": true, 00:15:19.654 "unmap": true, 00:15:19.654 "write_zeroes": true, 00:15:19.654 "flush": false, 00:15:19.654 "reset": true, 00:15:19.654 "compare": false, 00:15:19.654 "compare_and_write": false, 00:15:19.654 "abort": false, 00:15:19.654 "nvme_admin": false, 00:15:19.654 "nvme_io": false 00:15:19.654 }, 00:15:19.654 "driver_specific": { 00:15:19.654 "lvol": { 00:15:19.654 "lvol_store_uuid": "412df3f0-48f3-49b5-a204-e35644742869", 00:15:19.654 "base_bdev": "nvme0n1", 00:15:19.654 "thin_provision": true, 00:15:19.654 "snapshot": false, 00:15:19.654 "clone": false, 00:15:19.654 "esnap_clone": false 00:15:19.654 } 00:15:19.654 } 00:15:19.654 } 00:15:19.654 ]' 00:15:19.654 10:42:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:19.654 10:42:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:19.654 10:42:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:19.654 10:42:50 -- common/autotest_common.sh@1373 -- # 
nb=26476544 00:15:19.654 10:42:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:19.654 10:42:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:19.654 10:42:50 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:19.655 10:42:50 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fd1bd26e-1e66-4052-ae3d-43a60f32d3a7 -c nvc0n1p0 --l2p_dram_limit 20 00:15:19.916 [2024-12-03 10:42:50.353635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.353677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:19.916 [2024-12-03 10:42:50.353690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:19.916 [2024-12-03 10:42:50.353697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.353734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.353742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:19.916 [2024-12-03 10:42:50.353751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:19.916 [2024-12-03 10:42:50.353756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.353771] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:19.916 [2024-12-03 10:42:50.354348] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:19.916 [2024-12-03 10:42:50.354366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.354373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:19.916 [2024-12-03 10:42:50.354382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:15:19.916 [2024-12-03 10:42:50.354388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.354409] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 824b5e62-051c-470d-b9f2-4b8dc0bc65db 00:15:19.916 [2024-12-03 10:42:50.355654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.355678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:19.916 [2024-12-03 10:42:50.355687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:19.916 [2024-12-03 10:42:50.355696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.362697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.362725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:19.916 [2024-12-03 10:42:50.362733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.968 ms 00:15:19.916 [2024-12-03 10:42:50.362741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.362836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.362846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:19.916 [2024-12-03 10:42:50.362853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:19.916 [2024-12-03 10:42:50.362863] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.362899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.362909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:19.916 [2024-12-03 10:42:50.362916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:19.916 [2024-12-03 10:42:50.362924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.362940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:19.916 [2024-12-03 10:42:50.366282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.366305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:19.916 [2024-12-03 10:42:50.366314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:15:19.916 [2024-12-03 10:42:50.366320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.366347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.916 [2024-12-03 10:42:50.366354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:19.916 [2024-12-03 10:42:50.366361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:19.916 [2024-12-03 10:42:50.366366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.916 [2024-12-03 10:42:50.366379] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:19.916 [2024-12-03 10:42:50.366474] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:19.916 [2024-12-03 10:42:50.366487] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:19.916 [2024-12-03 10:42:50.366496] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:19.916 [2024-12-03 10:42:50.366506] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:19.916 [2024-12-03 10:42:50.366514] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:19.916 [2024-12-03 10:42:50.366521] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:19.917 [2024-12-03 10:42:50.366527] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:19.917 [2024-12-03 10:42:50.366537] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:19.917 [2024-12-03 10:42:50.366543] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:19.917 [2024-12-03 10:42:50.366550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.917 [2024-12-03 10:42:50.366556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:19.917 [2024-12-03 10:42:50.366563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:15:19.917 [2024-12-03 10:42:50.366569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.917 [2024-12-03 10:42:50.366616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.917 [2024-12-03 10:42:50.366622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Verify layout 00:15:19.917 [2024-12-03 10:42:50.366629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:19.917 [2024-12-03 10:42:50.366634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.917 [2024-12-03 10:42:50.366688] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:19.917 [2024-12-03 10:42:50.366696] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:19.917 [2024-12-03 10:42:50.366703] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:19.917 [2024-12-03 10:42:50.366726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:19.917 [2024-12-03 10:42:50.366745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:19.917 [2024-12-03 10:42:50.366764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:19.917 [2024-12-03 10:42:50.366770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:19.917 [2024-12-03 10:42:50.366777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:19.917 [2024-12-03 10:42:50.366782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:19.917 [2024-12-03 10:42:50.366788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:19.917 [2024-12-03 10:42:50.366793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:19.917 [2024-12-03 10:42:50.366806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:19.917 [2024-12-03 10:42:50.366812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:19.917 [2024-12-03 10:42:50.366825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:19.917 [2024-12-03 10:42:50.366830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:19.917 [2024-12-03 10:42:50.366841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:19.917 [2024-12-03 10:42:50.366859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:19.917 [2024-12-03 10:42:50.366875] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:19.917 [2024-12-03 10:42:50.366894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366905] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:19.917 [2024-12-03 10:42:50.366909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:19.917 [2024-12-03 10:42:50.366922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:19.917 [2024-12-03 10:42:50.366928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:19.917 [2024-12-03 10:42:50.366933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:19.917 [2024-12-03 10:42:50.366939] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:19.917 [2024-12-03 10:42:50.366945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:19.917 [2024-12-03 10:42:50.366955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:19.917 [2024-12-03 10:42:50.366961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.917 [2024-12-03 10:42:50.366968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:19.917 [2024-12-03 10:42:50.366973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:19.917 [2024-12-03 10:42:50.366979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:19.917 [2024-12-03 10:42:50.366985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:19.917 [2024-12-03 10:42:50.366994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:19.917 [2024-12-03 10:42:50.366998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:19.917 [2024-12-03 10:42:50.367005] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:19.917 [2024-12-03 10:42:50.367013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:19.917 [2024-12-03 10:42:50.367022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:19.917 [2024-12-03 10:42:50.367029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:19.917 [2024-12-03 10:42:50.367035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:19.917 [2024-12-03 10:42:50.367040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:19.917 [2024-12-03 10:42:50.367047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:19.917 [2024-12-03 10:42:50.367065] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:19.917 [2024-12-03 10:42:50.367072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:19.917 [2024-12-03 10:42:50.367078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:19.917 [2024-12-03 10:42:50.367085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:19.917 [2024-12-03 10:42:50.367090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:19.917 [2024-12-03 10:42:50.367098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:19.917 [2024-12-03 10:42:50.367103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:19.917 [2024-12-03 10:42:50.367112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:19.917 [2024-12-03 10:42:50.367118] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:19.917 [2024-12-03 10:42:50.367125] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:19.917 [2024-12-03 10:42:50.367132] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:19.917 [2024-12-03 10:42:50.367139] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:19.917 [2024-12-03 10:42:50.367144] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:19.917 [2024-12-03 10:42:50.367151] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:19.917 [2024-12-03 10:42:50.367157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.917 [2024-12-03 10:42:50.367166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:19.917 [2024-12-03 10:42:50.367172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:15:19.917 [2024-12-03 10:42:50.367181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.917 [2024-12-03 10:42:50.381128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.917 [2024-12-03 10:42:50.381264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:19.917 [2024-12-03 10:42:50.381277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.912 ms 00:15:19.917 [2024-12-03 10:42:50.381287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.917 [2024-12-03 10:42:50.381355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.917 [2024-12-03 10:42:50.381365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:19.917 [2024-12-03 10:42:50.381372] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:19.917 [2024-12-03 10:42:50.381381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.917 [2024-12-03 10:42:50.425895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.917 [2024-12-03 10:42:50.425927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:19.917 [2024-12-03 10:42:50.425938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.482 ms 00:15:19.917 [2024-12-03 10:42:50.425948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.917 [2024-12-03 10:42:50.425977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.917 [2024-12-03 10:42:50.425988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:19.917 [2024-12-03 10:42:50.425995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:19.918 [2024-12-03 10:42:50.426003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.918 [2024-12-03 10:42:50.426424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.918 [2024-12-03 10:42:50.426540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:19.918 [2024-12-03 10:42:50.426552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:15:19.918 [2024-12-03 10:42:50.426560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.918 [2024-12-03 10:42:50.426655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.918 [2024-12-03 10:42:50.426665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:19.918 [2024-12-03 10:42:50.426674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:15:19.918 [2024-12-03 10:42:50.426681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.918 [2024-12-03 10:42:50.439340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.918 [2024-12-03 10:42:50.439364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:19.918 [2024-12-03 10:42:50.439374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.647 ms 00:15:19.918 [2024-12-03 10:42:50.439381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.918 [2024-12-03 10:42:50.449481] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:19.918 [2024-12-03 10:42:50.454888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.918 [2024-12-03 10:42:50.454910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:19.918 [2024-12-03 10:42:50.454921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.448 ms 00:15:19.918 [2024-12-03 10:42:50.454928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.179 [2024-12-03 10:42:50.539040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.179 [2024-12-03 10:42:50.539099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:20.179 [2024-12-03 10:42:50.539116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.090 ms 00:15:20.179 [2024-12-03 10:42:50.539128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.179 [2024-12-03 10:42:50.539171] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: 
*NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:20.179 [2024-12-03 10:42:50.539183] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:24.371 [2024-12-03 10:42:54.206802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.206911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:24.371 [2024-12-03 10:42:54.206937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3667.601 ms 00:15:24.371 [2024-12-03 10:42:54.206948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.207226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.207241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:24.371 [2024-12-03 10:42:54.207254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:15:24.371 [2024-12-03 10:42:54.207264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.234047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.234114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:24.371 [2024-12-03 10:42:54.234133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.724 ms 00:15:24.371 [2024-12-03 10:42:54.234147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.259353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.259401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:24.371 [2024-12-03 10:42:54.259421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.151 ms 00:15:24.371 [2024-12-03 10:42:54.259429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.259820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.259838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:24.371 [2024-12-03 10:42:54.259850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:15:24.371 [2024-12-03 10:42:54.259859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.336294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.336347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:24.371 [2024-12-03 10:42:54.336365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.391 ms 00:15:24.371 [2024-12-03 10:42:54.336375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.365215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.365264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:24.371 [2024-12-03 10:42:54.365281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.783 ms 00:15:24.371 [2024-12-03 10:42:54.365291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.366862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 
10:42:54.366911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:24.371 [2024-12-03 10:42:54.366928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:15:24.371 [2024-12-03 10:42:54.366941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.393714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.393759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:24.371 [2024-12-03 10:42:54.393775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.726 ms 00:15:24.371 [2024-12-03 10:42:54.393783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.393838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.393849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:24.371 [2024-12-03 10:42:54.393865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:24.371 [2024-12-03 10:42:54.393874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.393976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.371 [2024-12-03 10:42:54.393989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:24.371 [2024-12-03 10:42:54.394001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:24.371 [2024-12-03 10:42:54.394009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.371 [2024-12-03 10:42:54.395394] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4041.157 ms, result 0 00:15:24.371 { 00:15:24.371 "name": "ftl0", 00:15:24.371 "uuid": "824b5e62-051c-470d-b9f2-4b8dc0bc65db" 00:15:24.371 } 00:15:24.371 10:42:54 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:24.371 10:42:54 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:15:24.371 10:42:54 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:15:24.371 10:42:54 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:24.371 [2024-12-03 10:42:54.702873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:24.371 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:24.371 Zero copy mechanism will not be used. 00:15:24.371 Running I/O for 4 seconds... 
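The timed runs in this section are all driven through bdevperf's RPC helper rather than a standalone fio job. A minimal sketch of the same sequence in bash, assuming a bdevperf instance was already started in RPC-wait mode against ftl0 (the later timing_exit line for 'build/examples/bdevperf -z -T ftl0' shows how it was launched):

# Sketch only -- replays the perform_tests calls visible in this log.
bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# 68 KiB (69632-byte) writes at queue depth 1; 69632 exceeds the 65536-byte
# zero-copy threshold, hence the "Zero copy mechanism will not be used" notice.
$bdevperf_py perform_tests -q 1 -w randwrite -t 4 -o 69632

# 4 KiB random writes at queue depth 128.
$bdevperf_py perform_tests -q 128 -w randwrite -t 4 -o 4096

# 4 KiB verify pass (writes plus readback compare) at queue depth 128.
$bdevperf_py perform_tests -q 128 -w verify -t 4 -o 4096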
00:15:28.574 00:15:28.574 Latency(us) 00:15:28.574 [2024-12-03T10:42:59.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.574 [2024-12-03T10:42:59.187Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:28.574 ftl0 : 4.00 759.54 50.44 0.00 0.00 1393.11 403.30 2571.03 00:15:28.574 [2024-12-03T10:42:59.187Z] =================================================================================================================== 00:15:28.574 [2024-12-03T10:42:59.187Z] Total : 759.54 50.44 0.00 0.00 1393.11 403.30 2571.03 00:15:28.574 0 00:15:28.574 [2024-12-03 10:42:58.710145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:28.574 10:42:58 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:32.777 [2024-12-03 10:42:58.815013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:32.777 Running I/O for 4 seconds... 00:15:32.777 00:15:32.777 Latency(us) 00:15:32.777 [2024-12-03T10:43:03.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.777 [2024-12-03T10:43:03.390Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.777 ftl0 : 4.03 5763.03 22.51 0.00 0.00 22128.20 395.42 44967.78 00:15:32.777 [2024-12-03T10:43:03.390Z] =================================================================================================================== 00:15:32.777 [2024-12-03T10:43:03.390Z] Total : 5763.03 22.51 0.00 0.00 22128.20 0.00 44967.78 00:15:32.777 [2024-12-03 10:43:02.852081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:32.777 10:43:02 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:32.777 [2024-12-03 10:43:02.966010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:32.777 Running I/O for 4 seconds...
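As a cross-check on the two randwrite runs above (not part of the test output): by Little's law, the mean number of outstanding I/Os equals throughput times mean latency, and both runs recover their configured queue depth:

L = \lambda W
\text{depth 1:} \quad 759.54\,\mathrm{s^{-1}} \times 1.39311\,\mathrm{ms} \approx 1.06
\text{depth 128:} \quad 5763.03\,\mathrm{s^{-1}} \times 22.1282\,\mathrm{ms} \approx 127.5

So the roughly 16x jump in average latency (1393 us to 22128 us) is queuing at depth 128, not a slower device.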
00:15:36.980 00:15:36.980 Latency(us) 00:15:36.980 [2024-12-03T10:43:07.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.980 [2024-12-03T10:43:07.593Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:36.980 Verification LBA range: start 0x0 length 0x1400000 00:15:36.980 ftl0 : 4.01 8141.42 31.80 0.00 0.00 15681.89 223.70 29642.44 00:15:36.980 [2024-12-03T10:43:07.593Z] =================================================================================================================== 00:15:36.980 [2024-12-03T10:43:07.593Z] Total : 8141.42 31.80 0.00 0.00 15681.89 0.00 29642.44 00:15:36.980 0 00:15:36.980 [2024-12-03 10:43:06.993543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:36.980 10:43:07 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:15:36.980 [2024-12-03 10:43:07.192877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.192934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:36.980 [2024-12-03 10:43:07.192952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:36.980 [2024-12-03 10:43:07.192962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.192987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:36.980 [2024-12-03 10:43:07.196110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.196165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:36.980 [2024-12-03 10:43:07.196180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:15:36.980 [2024-12-03 10:43:07.196196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.199687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.199940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:36.980 [2024-12-03 10:43:07.199965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.462 ms 00:15:36.980 [2024-12-03 10:43:07.199977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.410585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.410789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:36.980 [2024-12-03 10:43:07.410816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 210.583 ms 00:15:36.980 [2024-12-03 10:43:07.410829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.416987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.417036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:36.980 [2024-12-03 10:43:07.417049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:15:36.980 [2024-12-03 10:43:07.417222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.438934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.438980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:15:36.980 [2024-12-03 10:43:07.438991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.631 ms 00:15:36.980 [2024-12-03 10:43:07.439004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.454518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.454559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:36.980 [2024-12-03 10:43:07.454571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.475 ms 00:15:36.980 [2024-12-03 10:43:07.454580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.454707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.454719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:36.980 [2024-12-03 10:43:07.454727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:15:36.980 [2024-12-03 10:43:07.454736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.474822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.474858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:36.980 [2024-12-03 10:43:07.474867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.072 ms 00:15:36.980 [2024-12-03 10:43:07.474876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.493871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.493904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:36.980 [2024-12-03 10:43:07.493913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.964 ms 00:15:36.980 [2024-12-03 10:43:07.493925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.512230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.512347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:36.980 [2024-12-03 10:43:07.512361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.276 ms 00:15:36.980 [2024-12-03 10:43:07.512368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.980 [2024-12-03 10:43:07.530118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.980 [2024-12-03 10:43:07.530148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:36.980 [2024-12-03 10:43:07.530156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.698 ms 00:15:36.981 [2024-12-03 10:43:07.530164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.981 [2024-12-03 10:43:07.530191] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:36.981 [2024-12-03 10:43:07.530205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530228] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 
10:43:07.530406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:15:36.981 [2024-12-03 10:43:07.530580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:36.981 [2024-12-03 10:43:07.530819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:36.982 [2024-12-03 10:43:07.530917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:36.982 [2024-12-03 10:43:07.530923] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 824b5e62-051c-470d-b9f2-4b8dc0bc65db 00:15:36.982 [2024-12-03 10:43:07.530934] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:36.982 
[2024-12-03 10:43:07.530940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:36.982 [2024-12-03 10:43:07.530947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:36.982 [2024-12-03 10:43:07.530953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:36.982 [2024-12-03 10:43:07.530960] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:36.982 [2024-12-03 10:43:07.530969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:36.982 [2024-12-03 10:43:07.530976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:36.982 [2024-12-03 10:43:07.530981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:36.982 [2024-12-03 10:43:07.530988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:36.982 [2024-12-03 10:43:07.530992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.982 [2024-12-03 10:43:07.531000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:36.982 [2024-12-03 10:43:07.531007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:15:36.982 [2024-12-03 10:43:07.531014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.982 [2024-12-03 10:43:07.541501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.982 [2024-12-03 10:43:07.541590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:36.982 [2024-12-03 10:43:07.541630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.465 ms 00:15:36.982 [2024-12-03 10:43:07.541655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.982 [2024-12-03 10:43:07.541823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.982 [2024-12-03 10:43:07.541841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:36.982 [2024-12-03 10:43:07.541857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:15:36.982 [2024-12-03 10:43:07.541873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.982 [2024-12-03 10:43:07.573515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.982 [2024-12-03 10:43:07.573604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:36.982 [2024-12-03 10:43:07.573647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.982 [2024-12-03 10:43:07.573666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.982 [2024-12-03 10:43:07.573728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.982 [2024-12-03 10:43:07.573745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:36.982 [2024-12-03 10:43:07.573761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.982 [2024-12-03 10:43:07.573777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.982 [2024-12-03 10:43:07.573836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.982 [2024-12-03 10:43:07.573873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:36.982 [2024-12-03 10:43:07.573888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.982 [2024-12-03 10:43:07.573908] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.982 [2024-12-03 10:43:07.573929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.982 [2024-12-03 10:43:07.573946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:36.982 [2024-12-03 10:43:07.573961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.982 [2024-12-03 10:43:07.574000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.633839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.633963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:37.240 [2024-12-03 10:43:07.634008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.240 [2024-12-03 10:43:07.634030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.657858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.657958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:37.240 [2024-12-03 10:43:07.658135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.240 [2024-12-03 10:43:07.658156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.658222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.658301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:37.240 [2024-12-03 10:43:07.658342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.240 [2024-12-03 10:43:07.658364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.658413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.658433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:37.240 [2024-12-03 10:43:07.658448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.240 [2024-12-03 10:43:07.658464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.658556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.658633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:37.240 [2024-12-03 10:43:07.658649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.240 [2024-12-03 10:43:07.658665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.658701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.658722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:37.240 [2024-12-03 10:43:07.658764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.240 [2024-12-03 10:43:07.658783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.658827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.658875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:37.240 [2024-12-03 10:43:07.658893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:15:37.240 [2024-12-03 10:43:07.658910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.658985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.240 [2024-12-03 10:43:07.659006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:37.240 [2024-12-03 10:43:07.659021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.240 [2024-12-03 10:43:07.659037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.240 [2024-12-03 10:43:07.659180] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.269 ms, result 0 00:15:37.240 true 00:15:37.240 10:43:07 -- ftl/bdevperf.sh@37 -- # killprocess 71589 00:15:37.240 10:43:07 -- common/autotest_common.sh@936 -- # '[' -z 71589 ']' 00:15:37.240 10:43:07 -- common/autotest_common.sh@940 -- # kill -0 71589 00:15:37.240 10:43:07 -- common/autotest_common.sh@941 -- # uname 00:15:37.240 10:43:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.240 10:43:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71589 00:15:37.240 killing process with pid 71589 00:15:37.240 Received shutdown signal, test time was about 4.000000 seconds 00:15:37.240 00:15:37.240 Latency(us) 00:15:37.240 [2024-12-03T10:43:07.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.240 [2024-12-03T10:43:07.853Z] =================================================================================================================== 00:15:37.240 [2024-12-03T10:43:07.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.240 10:43:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.240 10:43:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.240 10:43:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71589' 00:15:37.241 10:43:07 -- common/autotest_common.sh@955 -- # kill 71589 00:15:37.241 10:43:07 -- common/autotest_common.sh@960 -- # wait 71589 00:15:37.875 10:43:08 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:15:37.875 10:43:08 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:37.875 10:43:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:37.875 10:43:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.875 10:43:08 -- ftl/bdevperf.sh@41 -- # remove_shm 00:15:37.876 10:43:08 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:37.876 Remove shared memory files 00:15:37.876 10:43:08 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:37.876 10:43:08 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:37.876 10:43:08 -- ftl/common.sh@207 -- # rm -f rm -f 00:15:37.876 10:43:08 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:37.876 10:43:08 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:37.876 ************************************ 00:15:37.876 END TEST ftl_bdevperf 00:15:37.876 ************************************ 00:15:37.876 00:15:37.876 real 0m21.708s 00:15:37.876 user 0m24.048s 00:15:37.876 sys 0m0.870s 00:15:37.876 10:43:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:37.876 10:43:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.876 10:43:08 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:37.876 10:43:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
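The killprocess call traced above expands, step for step, into roughly the following helper. This is a sketch reconstructed from the visible xtrace lines (the @NNN markers), not the verbatim autotest_common.sh source; the real helper also special-cases targets launched through sudo:

killprocess() {
    # $1 = pid of the process under test (71589 above)
    [[ -n $1 ]] || return 1                             # @936: refuse an empty pid
    kill -0 "$1" || return 0                            # @940: already gone, nothing to do
    local process_name
    if [[ $(uname) == Linux ]]; then                    # @941
        process_name=$(ps --no-headers -o comm= "$1")   # @942: resolves to reactor_0 here
    fi
    [[ $process_name != sudo ]] || return 1             # @946: never signal a sudo wrapper
    echo "killing process with pid $1"                  # @954
    kill "$1"                                           # @955: SIGTERM; bdevperf prints its
    wait "$1"                                           # @960: shutdown stats, then is reaped
}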
00:15:37.876 10:43:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.876 10:43:08 -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 ************************************ 00:15:38.154 START TEST ftl_trim 00:15:38.154 ************************************ 00:15:38.154 10:43:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:38.154 * Looking for test storage... 00:15:38.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.154 10:43:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:38.154 10:43:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:38.154 10:43:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:38.154 10:43:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:38.154 10:43:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:38.154 10:43:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:38.154 10:43:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:38.154 10:43:08 -- scripts/common.sh@335 -- # IFS=.-: 00:15:38.154 10:43:08 -- scripts/common.sh@335 -- # read -ra ver1 00:15:38.154 10:43:08 -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.154 10:43:08 -- scripts/common.sh@336 -- # read -ra ver2 00:15:38.154 10:43:08 -- scripts/common.sh@337 -- # local 'op=<' 00:15:38.154 10:43:08 -- scripts/common.sh@339 -- # ver1_l=2 00:15:38.154 10:43:08 -- scripts/common.sh@340 -- # ver2_l=1 00:15:38.154 10:43:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:38.154 10:43:08 -- scripts/common.sh@343 -- # case "$op" in 00:15:38.154 10:43:08 -- scripts/common.sh@344 -- # : 1 00:15:38.154 10:43:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:38.154 10:43:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.154 10:43:08 -- scripts/common.sh@364 -- # decimal 1 00:15:38.154 10:43:08 -- scripts/common.sh@352 -- # local d=1 00:15:38.154 10:43:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.154 10:43:08 -- scripts/common.sh@354 -- # echo 1 00:15:38.154 10:43:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:38.154 10:43:08 -- scripts/common.sh@365 -- # decimal 2 00:15:38.154 10:43:08 -- scripts/common.sh@352 -- # local d=2 00:15:38.154 10:43:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.154 10:43:08 -- scripts/common.sh@354 -- # echo 2 00:15:38.154 10:43:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:38.154 10:43:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:38.154 10:43:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:38.154 10:43:08 -- scripts/common.sh@367 -- # return 0 00:15:38.154 10:43:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.154 10:43:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:38.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.154 --rc genhtml_branch_coverage=1 00:15:38.154 --rc genhtml_function_coverage=1 00:15:38.154 --rc genhtml_legend=1 00:15:38.154 --rc geninfo_all_blocks=1 00:15:38.154 --rc geninfo_unexecuted_blocks=1 00:15:38.154 00:15:38.154 ' 00:15:38.154 10:43:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:38.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.154 --rc genhtml_branch_coverage=1 00:15:38.154 --rc genhtml_function_coverage=1 00:15:38.154 --rc genhtml_legend=1 00:15:38.154 --rc geninfo_all_blocks=1 00:15:38.154 --rc geninfo_unexecuted_blocks=1 00:15:38.154 00:15:38.154 ' 00:15:38.154 10:43:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:38.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.154 --rc genhtml_branch_coverage=1 00:15:38.154 --rc genhtml_function_coverage=1 00:15:38.154 --rc genhtml_legend=1 00:15:38.154 --rc geninfo_all_blocks=1 00:15:38.154 --rc geninfo_unexecuted_blocks=1 00:15:38.154 00:15:38.154 ' 00:15:38.154 10:43:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:38.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.154 --rc genhtml_branch_coverage=1 00:15:38.154 --rc genhtml_function_coverage=1 00:15:38.154 --rc genhtml_legend=1 00:15:38.154 --rc geninfo_all_blocks=1 00:15:38.154 --rc geninfo_unexecuted_blocks=1 00:15:38.154 00:15:38.154 ' 00:15:38.154 10:43:08 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:38.154 10:43:08 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:38.154 10:43:08 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.154 10:43:08 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.154 10:43:08 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:38.154 10:43:08 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:38.154 10:43:08 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.154 10:43:08 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:38.154 10:43:08 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:38.154 10:43:08 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.154 10:43:08 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.154 10:43:08 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:38.154 10:43:08 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:38.154 10:43:08 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.154 10:43:08 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.154 10:43:08 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:38.154 10:43:08 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:38.154 10:43:08 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.154 10:43:08 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.154 10:43:08 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:38.154 10:43:08 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:38.154 10:43:08 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.154 10:43:08 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.154 10:43:08 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.154 10:43:08 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.154 10:43:08 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:38.154 10:43:08 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:38.154 10:43:08 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.154 10:43:08 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.154 10:43:08 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.154 10:43:08 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:15:38.154 10:43:08 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:15:38.154 10:43:08 -- ftl/trim.sh@25 -- # timeout=240 00:15:38.154 10:43:08 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:38.154 10:43:08 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:38.154 10:43:08 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:38.154 10:43:08 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:38.154 10:43:08 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:38.154 10:43:08 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:38.154 10:43:08 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:38.154 10:43:08 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:38.154 10:43:08 -- ftl/trim.sh@40 -- # svcpid=71950 00:15:38.154 10:43:08 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:38.154 10:43:08 -- ftl/trim.sh@41 -- # waitforlisten 71950 00:15:38.154 10:43:08 -- common/autotest_common.sh@829 -- # '[' -z 71950 ']' 00:15:38.154 10:43:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.155 
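The waitforlisten trace begins here and its remaining local declarations continue just below. A minimal sketch of what such a helper does, assuming the liveness probe is the standard rpc_get_methods RPC (the probe itself is not visible in this log):

waitforlisten() {
    # $1 = target pid, $2 = optional RPC socket path
    local rpc_addr=${2:-/var/tmp/spdk.sock}             # @833
    local max_retries=100                               # @834
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$1" 2> /dev/null || return 1           # target died during startup
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                    # RPC server is up and answering
        fi
        sleep 0.5
    done
    return 1                                            # gave up after max_retries probes
}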
10:43:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.155 10:43:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.155 10:43:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.155 10:43:08 -- common/autotest_common.sh@10 -- # set +x 00:15:38.155 [2024-12-03 10:43:08.716954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:38.155 [2024-12-03 10:43:08.717275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71950 ] 00:15:38.414 [2024-12-03 10:43:08.866438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.672 [2024-12-03 10:43:09.033422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:38.672 [2024-12-03 10:43:09.033978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.672 [2024-12-03 10:43:09.034281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.672 [2024-12-03 10:43:09.034305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.608 10:43:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.608 10:43:10 -- common/autotest_common.sh@862 -- # return 0 00:15:39.608 10:43:10 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:39.608 10:43:10 -- ftl/common.sh@54 -- # local name=nvme0 00:15:39.608 10:43:10 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:39.608 10:43:10 -- ftl/common.sh@56 -- # local size=103424 00:15:39.608 10:43:10 -- ftl/common.sh@59 -- # local base_bdev 00:15:39.608 10:43:10 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:40.174 10:43:10 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:40.174 10:43:10 -- ftl/common.sh@62 -- # local base_size 00:15:40.175 10:43:10 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:40.175 10:43:10 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:40.175 10:43:10 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:40.175 10:43:10 -- common/autotest_common.sh@1369 -- # local bs 00:15:40.175 10:43:10 -- common/autotest_common.sh@1370 -- # local nb 00:15:40.175 10:43:10 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:40.175 10:43:10 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:40.175 { 00:15:40.175 "name": "nvme0n1", 00:15:40.175 "aliases": [ 00:15:40.175 "a47391ea-0244-45b5-8d63-436ac62ba94b" 00:15:40.175 ], 00:15:40.175 "product_name": "NVMe disk", 00:15:40.175 "block_size": 4096, 00:15:40.175 "num_blocks": 1310720, 00:15:40.175 "uuid": "a47391ea-0244-45b5-8d63-436ac62ba94b", 00:15:40.175 "assigned_rate_limits": { 00:15:40.175 "rw_ios_per_sec": 0, 00:15:40.175 "rw_mbytes_per_sec": 0, 00:15:40.175 "r_mbytes_per_sec": 0, 00:15:40.175 "w_mbytes_per_sec": 0 00:15:40.175 }, 00:15:40.175 "claimed": true, 00:15:40.175 "claim_type": "read_many_write_one", 00:15:40.175 "zoned": false, 00:15:40.175 "supported_io_types": { 00:15:40.175 "read": true, 00:15:40.175 "write": true, 00:15:40.175 "unmap": true, 00:15:40.175 
"write_zeroes": true, 00:15:40.175 "flush": true, 00:15:40.175 "reset": true, 00:15:40.175 "compare": true, 00:15:40.175 "compare_and_write": false, 00:15:40.175 "abort": true, 00:15:40.175 "nvme_admin": true, 00:15:40.175 "nvme_io": true 00:15:40.175 }, 00:15:40.175 "driver_specific": { 00:15:40.175 "nvme": [ 00:15:40.175 { 00:15:40.175 "pci_address": "0000:00:07.0", 00:15:40.175 "trid": { 00:15:40.175 "trtype": "PCIe", 00:15:40.175 "traddr": "0000:00:07.0" 00:15:40.175 }, 00:15:40.175 "ctrlr_data": { 00:15:40.175 "cntlid": 0, 00:15:40.175 "vendor_id": "0x1b36", 00:15:40.175 "model_number": "QEMU NVMe Ctrl", 00:15:40.175 "serial_number": "12341", 00:15:40.175 "firmware_revision": "8.0.0", 00:15:40.175 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:40.175 "oacs": { 00:15:40.175 "security": 0, 00:15:40.175 "format": 1, 00:15:40.175 "firmware": 0, 00:15:40.175 "ns_manage": 1 00:15:40.175 }, 00:15:40.175 "multi_ctrlr": false, 00:15:40.175 "ana_reporting": false 00:15:40.175 }, 00:15:40.175 "vs": { 00:15:40.175 "nvme_version": "1.4" 00:15:40.175 }, 00:15:40.175 "ns_data": { 00:15:40.175 "id": 1, 00:15:40.175 "can_share": false 00:15:40.175 } 00:15:40.175 } 00:15:40.175 ], 00:15:40.175 "mp_policy": "active_passive" 00:15:40.175 } 00:15:40.175 } 00:15:40.175 ]' 00:15:40.175 10:43:10 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:40.175 10:43:10 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:40.175 10:43:10 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:40.175 10:43:10 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:40.175 10:43:10 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:40.175 10:43:10 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:40.175 10:43:10 -- ftl/common.sh@63 -- # base_size=5120 00:15:40.175 10:43:10 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:40.175 10:43:10 -- ftl/common.sh@67 -- # clear_lvols 00:15:40.175 10:43:10 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:40.175 10:43:10 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:40.433 10:43:10 -- ftl/common.sh@28 -- # stores=412df3f0-48f3-49b5-a204-e35644742869 00:15:40.433 10:43:10 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:40.433 10:43:10 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 412df3f0-48f3-49b5-a204-e35644742869 00:15:40.693 10:43:11 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:40.951 10:43:11 -- ftl/common.sh@68 -- # lvs=9915394c-9cf1-49d3-b957-c34d6cfaf70b 00:15:40.951 10:43:11 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9915394c-9cf1-49d3-b957-c34d6cfaf70b 00:15:40.951 10:43:11 -- ftl/trim.sh@43 -- # split_bdev=59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:40.951 10:43:11 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:40.951 10:43:11 -- ftl/common.sh@35 -- # local name=nvc0 00:15:40.951 10:43:11 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:40.951 10:43:11 -- ftl/common.sh@37 -- # local base_bdev=59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:40.951 10:43:11 -- ftl/common.sh@38 -- # local cache_size= 00:15:40.951 10:43:11 -- ftl/common.sh@41 -- # get_bdev_size 59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:40.951 10:43:11 -- common/autotest_common.sh@1367 -- # local bdev_name=59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:40.951 10:43:11 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:15:40.951 10:43:11 -- common/autotest_common.sh@1369 -- # local bs 00:15:40.951 10:43:11 -- common/autotest_common.sh@1370 -- # local nb 00:15:40.951 10:43:11 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:41.210 10:43:11 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:41.210 { 00:15:41.210 "name": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:41.210 "aliases": [ 00:15:41.210 "lvs/nvme0n1p0" 00:15:41.210 ], 00:15:41.210 "product_name": "Logical Volume", 00:15:41.210 "block_size": 4096, 00:15:41.210 "num_blocks": 26476544, 00:15:41.210 "uuid": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:41.210 "assigned_rate_limits": { 00:15:41.210 "rw_ios_per_sec": 0, 00:15:41.210 "rw_mbytes_per_sec": 0, 00:15:41.210 "r_mbytes_per_sec": 0, 00:15:41.210 "w_mbytes_per_sec": 0 00:15:41.210 }, 00:15:41.210 "claimed": false, 00:15:41.210 "zoned": false, 00:15:41.210 "supported_io_types": { 00:15:41.210 "read": true, 00:15:41.210 "write": true, 00:15:41.210 "unmap": true, 00:15:41.210 "write_zeroes": true, 00:15:41.210 "flush": false, 00:15:41.210 "reset": true, 00:15:41.210 "compare": false, 00:15:41.210 "compare_and_write": false, 00:15:41.210 "abort": false, 00:15:41.210 "nvme_admin": false, 00:15:41.210 "nvme_io": false 00:15:41.210 }, 00:15:41.210 "driver_specific": { 00:15:41.210 "lvol": { 00:15:41.210 "lvol_store_uuid": "9915394c-9cf1-49d3-b957-c34d6cfaf70b", 00:15:41.210 "base_bdev": "nvme0n1", 00:15:41.210 "thin_provision": true, 00:15:41.210 "snapshot": false, 00:15:41.210 "clone": false, 00:15:41.210 "esnap_clone": false 00:15:41.210 } 00:15:41.210 } 00:15:41.210 } 00:15:41.210 ]' 00:15:41.210 10:43:11 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:41.210 10:43:11 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:41.210 10:43:11 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:41.210 10:43:11 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:41.210 10:43:11 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:41.210 10:43:11 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:41.210 10:43:11 -- ftl/common.sh@41 -- # local base_size=5171 00:15:41.210 10:43:11 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:41.210 10:43:11 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:41.469 10:43:11 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:41.469 10:43:11 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:41.469 10:43:11 -- ftl/common.sh@48 -- # get_bdev_size 59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:41.469 10:43:11 -- common/autotest_common.sh@1367 -- # local bdev_name=59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:41.469 10:43:12 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:41.469 10:43:12 -- common/autotest_common.sh@1369 -- # local bs 00:15:41.469 10:43:12 -- common/autotest_common.sh@1370 -- # local nb 00:15:41.469 10:43:12 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:41.728 10:43:12 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:41.728 { 00:15:41.728 "name": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:41.728 "aliases": [ 00:15:41.728 "lvs/nvme0n1p0" 00:15:41.728 ], 00:15:41.728 "product_name": "Logical Volume", 00:15:41.728 "block_size": 4096, 00:15:41.728 "num_blocks": 26476544, 
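# get_bdev_size above turns the bdev JSON into a size in MiB: block_size *
# num_blocks / 1024^2, i.e. 4096 * 1310720 = 5120 MiB for nvme0n1 earlier and
# 4096 * 26476544 = 103424 MiB for this lvol. The base_size=5171 that
# ftl/common.sh settles on is that lvol size divided by 20, a ~5% cache ratio
# inferred from the numbers in this run. Both computations standalone:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0] | .block_size * .num_blocks / (1024 * 1024)'   # -> 5120
echo $((103424 / 20))                                         # -> 5171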
00:15:41.728 "uuid": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:41.728 "assigned_rate_limits": { 00:15:41.728 "rw_ios_per_sec": 0, 00:15:41.728 "rw_mbytes_per_sec": 0, 00:15:41.728 "r_mbytes_per_sec": 0, 00:15:41.728 "w_mbytes_per_sec": 0 00:15:41.728 }, 00:15:41.728 "claimed": false, 00:15:41.728 "zoned": false, 00:15:41.728 "supported_io_types": { 00:15:41.728 "read": true, 00:15:41.728 "write": true, 00:15:41.728 "unmap": true, 00:15:41.728 "write_zeroes": true, 00:15:41.728 "flush": false, 00:15:41.728 "reset": true, 00:15:41.728 "compare": false, 00:15:41.728 "compare_and_write": false, 00:15:41.728 "abort": false, 00:15:41.728 "nvme_admin": false, 00:15:41.728 "nvme_io": false 00:15:41.728 }, 00:15:41.728 "driver_specific": { 00:15:41.728 "lvol": { 00:15:41.728 "lvol_store_uuid": "9915394c-9cf1-49d3-b957-c34d6cfaf70b", 00:15:41.728 "base_bdev": "nvme0n1", 00:15:41.728 "thin_provision": true, 00:15:41.728 "snapshot": false, 00:15:41.728 "clone": false, 00:15:41.728 "esnap_clone": false 00:15:41.728 } 00:15:41.728 } 00:15:41.728 } 00:15:41.728 ]' 00:15:41.728 10:43:12 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:41.728 10:43:12 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:41.728 10:43:12 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:41.728 10:43:12 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:41.728 10:43:12 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:41.728 10:43:12 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:41.728 10:43:12 -- ftl/common.sh@48 -- # cache_size=5171 00:15:41.728 10:43:12 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:41.986 10:43:12 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:41.986 10:43:12 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:41.986 10:43:12 -- ftl/trim.sh@47 -- # get_bdev_size 59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:41.986 10:43:12 -- common/autotest_common.sh@1367 -- # local bdev_name=59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:41.986 10:43:12 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:41.986 10:43:12 -- common/autotest_common.sh@1369 -- # local bs 00:15:41.986 10:43:12 -- common/autotest_common.sh@1370 -- # local nb 00:15:41.986 10:43:12 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59d596ab-36e9-4eb0-acde-a44161d4e67c 00:15:42.245 10:43:12 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:42.245 { 00:15:42.245 "name": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:42.245 "aliases": [ 00:15:42.245 "lvs/nvme0n1p0" 00:15:42.245 ], 00:15:42.245 "product_name": "Logical Volume", 00:15:42.245 "block_size": 4096, 00:15:42.245 "num_blocks": 26476544, 00:15:42.245 "uuid": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:42.245 "assigned_rate_limits": { 00:15:42.245 "rw_ios_per_sec": 0, 00:15:42.245 "rw_mbytes_per_sec": 0, 00:15:42.245 "r_mbytes_per_sec": 0, 00:15:42.245 "w_mbytes_per_sec": 0 00:15:42.245 }, 00:15:42.245 "claimed": false, 00:15:42.245 "zoned": false, 00:15:42.245 "supported_io_types": { 00:15:42.245 "read": true, 00:15:42.245 "write": true, 00:15:42.245 "unmap": true, 00:15:42.245 "write_zeroes": true, 00:15:42.245 "flush": false, 00:15:42.245 "reset": true, 00:15:42.245 "compare": false, 00:15:42.245 "compare_and_write": false, 00:15:42.245 "abort": false, 00:15:42.245 "nvme_admin": false, 00:15:42.245 "nvme_io": false 00:15:42.245 }, 00:15:42.245 "driver_specific": { 00:15:42.245 "lvol": { 00:15:42.245 
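# bdev_split_create above carves one 5171 MiB partition, nvc0n1p0, out of the
# cache controller's namespace to act as the FTL write buffer; cache_size was
# fixed at 5171 just before by the same get_bdev_size arithmetic. The
# equivalent standalone calls (bdev_split_delete being the inverse that
# removes the split children):
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" bdev_split_create nvc0n1 -s 5171 1   # prints the new bdev: nvc0n1p0
"$rpc_py" bdev_split_delete nvc0n1             # tears the split back down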
"lvol_store_uuid": "9915394c-9cf1-49d3-b957-c34d6cfaf70b", 00:15:42.245 "base_bdev": "nvme0n1", 00:15:42.245 "thin_provision": true, 00:15:42.245 "snapshot": false, 00:15:42.245 "clone": false, 00:15:42.245 "esnap_clone": false 00:15:42.245 } 00:15:42.245 } 00:15:42.245 } 00:15:42.245 ]' 00:15:42.245 10:43:12 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:42.245 10:43:12 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:42.245 10:43:12 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:42.245 10:43:12 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:42.245 10:43:12 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:42.245 10:43:12 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:42.245 10:43:12 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:42.245 10:43:12 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 59d596ab-36e9-4eb0-acde-a44161d4e67c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:42.245 [2024-12-03 10:43:12.850695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.245 [2024-12-03 10:43:12.850735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:42.245 [2024-12-03 10:43:12.850750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:42.245 [2024-12-03 10:43:12.850757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.245 [2024-12-03 10:43:12.853101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.245 [2024-12-03 10:43:12.853129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:42.245 [2024-12-03 10:43:12.853139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.317 ms 00:15:42.245 [2024-12-03 10:43:12.853146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.245 [2024-12-03 10:43:12.853213] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:42.245 [2024-12-03 10:43:12.853767] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:42.245 [2024-12-03 10:43:12.853793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.245 [2024-12-03 10:43:12.853801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:42.245 [2024-12-03 10:43:12.853809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:15:42.245 [2024-12-03 10:43:12.853815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.245 [2024-12-03 10:43:12.853909] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d105f200-e666-43cc-861e-bc4bc7b990de 00:15:42.245 [2024-12-03 10:43:12.855138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.245 [2024-12-03 10:43:12.855166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:42.245 [2024-12-03 10:43:12.855174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:15:42.245 [2024-12-03 10:43:12.855183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.861822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.506 [2024-12-03 10:43:12.861847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:42.506 
[2024-12-03 10:43:12.861855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.573 ms 00:15:42.506 [2024-12-03 10:43:12.861863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.861967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.506 [2024-12-03 10:43:12.861978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:42.506 [2024-12-03 10:43:12.861985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:42.506 [2024-12-03 10:43:12.861996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.862026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.506 [2024-12-03 10:43:12.862035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:42.506 [2024-12-03 10:43:12.862042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:42.506 [2024-12-03 10:43:12.862049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.862097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:42.506 [2024-12-03 10:43:12.865346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.506 [2024-12-03 10:43:12.865369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:42.506 [2024-12-03 10:43:12.865379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:15:42.506 [2024-12-03 10:43:12.865385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.865441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.506 [2024-12-03 10:43:12.865449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:42.506 [2024-12-03 10:43:12.865457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:42.506 [2024-12-03 10:43:12.865463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.865493] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:42.506 [2024-12-03 10:43:12.865583] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:42.506 [2024-12-03 10:43:12.865596] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:42.506 [2024-12-03 10:43:12.865604] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:42.506 [2024-12-03 10:43:12.865613] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:42.506 [2024-12-03 10:43:12.865621] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:42.506 [2024-12-03 10:43:12.865631] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:42.506 [2024-12-03 10:43:12.865637] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:42.506 [2024-12-03 10:43:12.865645] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:42.506 [2024-12-03 10:43:12.865651] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:42.506 [2024-12-03 
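# Everything in this startup trace was kicked off by the single RPC logged at
# trim.sh@49: it binds the thin-provisioned lvol as the FTL base device and
# nvc0n1p0 as its NV cache, --core_mask 7 matches the 0x7 mask spdk_tgt runs
# with, and the -t 240 client timeout covers the first-startup cache scrub.
# Reproduced standalone, exactly as issued in this run:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
    -b ftl0 -d 59d596ab-36e9-4eb0-acde-a44161d4e67c -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10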
10:43:12.865659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.506 [2024-12-03 10:43:12.865665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:42.506 [2024-12-03 10:43:12.865672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:15:42.506 [2024-12-03 10:43:12.865678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.865740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.506 [2024-12-03 10:43:12.865746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:42.506 [2024-12-03 10:43:12.865755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:42.506 [2024-12-03 10:43:12.865760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.506 [2024-12-03 10:43:12.865844] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:42.506 [2024-12-03 10:43:12.865852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:42.506 [2024-12-03 10:43:12.865859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:42.506 [2024-12-03 10:43:12.865866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.506 [2024-12-03 10:43:12.865873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:42.506 [2024-12-03 10:43:12.865878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:42.506 [2024-12-03 10:43:12.865885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:42.506 [2024-12-03 10:43:12.865890] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:42.506 [2024-12-03 10:43:12.865897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:42.506 [2024-12-03 10:43:12.865902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:42.506 [2024-12-03 10:43:12.865908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:42.506 [2024-12-03 10:43:12.865915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:42.506 [2024-12-03 10:43:12.865921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:42.506 [2024-12-03 10:43:12.865926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:42.506 [2024-12-03 10:43:12.865934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:42.506 [2024-12-03 10:43:12.865939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.506 [2024-12-03 10:43:12.865946] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:42.506 [2024-12-03 10:43:12.865951] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:42.506 [2024-12-03 10:43:12.865957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.506 [2024-12-03 10:43:12.865965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:42.506 [2024-12-03 10:43:12.865971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:42.506 [2024-12-03 10:43:12.865977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:42.506 [2024-12-03 10:43:12.865983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:42.506 [2024-12-03 10:43:12.865988] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:15:42.506 [2024-12-03 10:43:12.865995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.506 [2024-12-03 10:43:12.865999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:42.507 [2024-12-03 10:43:12.866006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:42.507 [2024-12-03 10:43:12.866010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.507 [2024-12-03 10:43:12.866017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:42.507 [2024-12-03 10:43:12.866021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:42.507 [2024-12-03 10:43:12.866028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.507 [2024-12-03 10:43:12.866032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:42.507 [2024-12-03 10:43:12.866041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:42.507 [2024-12-03 10:43:12.866046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:42.507 [2024-12-03 10:43:12.866070] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:42.507 [2024-12-03 10:43:12.866076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:42.507 [2024-12-03 10:43:12.866084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:42.507 [2024-12-03 10:43:12.866088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:42.507 [2024-12-03 10:43:12.866095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:42.507 [2024-12-03 10:43:12.866100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:42.507 [2024-12-03 10:43:12.866107] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:42.507 [2024-12-03 10:43:12.866113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:42.507 [2024-12-03 10:43:12.866120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:42.507 [2024-12-03 10:43:12.866126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:42.507 [2024-12-03 10:43:12.866135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:42.507 [2024-12-03 10:43:12.866140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:42.507 [2024-12-03 10:43:12.866147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:42.507 [2024-12-03 10:43:12.866152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:42.507 [2024-12-03 10:43:12.866160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:42.507 [2024-12-03 10:43:12.866165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:42.507 [2024-12-03 10:43:12.866174] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:42.507 [2024-12-03 10:43:12.866182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:42.507 [2024-12-03 10:43:12.866190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:42.507 [2024-12-03 10:43:12.866196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:42.507 [2024-12-03 10:43:12.866203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:42.507 [2024-12-03 10:43:12.866209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:42.507 [2024-12-03 10:43:12.866215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:42.507 [2024-12-03 10:43:12.866221] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:42.507 [2024-12-03 10:43:12.866228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:42.507 [2024-12-03 10:43:12.866233] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:42.507 [2024-12-03 10:43:12.866239] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:42.507 [2024-12-03 10:43:12.866245] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:42.507 [2024-12-03 10:43:12.866252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:42.507 [2024-12-03 10:43:12.866258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:42.507 [2024-12-03 10:43:12.866268] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:42.507 [2024-12-03 10:43:12.866273] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:42.507 [2024-12-03 10:43:12.866281] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:42.507 [2024-12-03 10:43:12.866287] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:42.507 [2024-12-03 10:43:12.866294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:42.507 [2024-12-03 10:43:12.866299] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:42.507 [2024-12-03 10:43:12.866306] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:42.507 [2024-12-03 10:43:12.866311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.866318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:42.507 [2024-12-03 10:43:12.866324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:15:42.507 [2024-12-03 10:43:12.866331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.880275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
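# The layout dump above is internally consistent once blk_offs/blk_sz are read
# as counts of 4096-byte FTL blocks: region type 0x8 (blk_sz 0x100000) is the
# 4096.00 MiB data_nvc region, and 23592960 L2P entries at the stated 4-byte
# address size are exactly the 90.00 MiB l2p region. Checked in shell:
echo $((0x100000 * 4096 / 1024 / 1024))   # -> 4096 (MiB, NV cache data region)
echo $((23592960 * 4 / 1024 / 1024))      # -> 90   (MiB, full L2P table)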
00:15:42.507 [2024-12-03 10:43:12.880375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.507 [2024-12-03 10:43:12.880416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.871 ms 00:15:42.507 [2024-12-03 10:43:12.880438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.880554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.880578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:42.507 [2024-12-03 10:43:12.880598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:42.507 [2024-12-03 10:43:12.880616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.908385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.908490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.507 [2024-12-03 10:43:12.908530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.728 ms 00:15:42.507 [2024-12-03 10:43:12.908550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.908611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.908653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.507 [2024-12-03 10:43:12.908673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:42.507 [2024-12-03 10:43:12.908692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.909164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.909242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.507 [2024-12-03 10:43:12.909280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:15:42.507 [2024-12-03 10:43:12.909300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.909415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.909487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.507 [2024-12-03 10:43:12.909506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:15:42.507 [2024-12-03 10:43:12.909522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.933370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.933539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.507 [2024-12-03 10:43:12.933619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.806 ms 00:15:42.507 [2024-12-03 10:43:12.933656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:12.945811] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:42.507 [2024-12-03 10:43:12.961001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:12.961103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:42.507 [2024-12-03 10:43:12.961146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.178 ms 00:15:42.507 
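# The "l2p maximum resident size is: 59 (of 60) MiB" notice above is the
# --l2p_dram_limit 60 taking effect: the full 90 MiB table cannot stay in
# DRAM, so FTL pages it with at most ~59 MiB resident (the limit less what
# appears to be internal bookkeeping). A hypothetical variant raising the
# limit past the table size, which should let the whole L2P stay resident:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
    -b ftl0 -d 59d596ab-36e9-4eb0-acde-a44161d4e67c -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 90 --overprovisioning 10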
[2024-12-03 10:43:12.961164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:13.035630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.507 [2024-12-03 10:43:13.035742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:42.507 [2024-12-03 10:43:13.035811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.395 ms 00:15:42.507 [2024-12-03 10:43:13.035831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.507 [2024-12-03 10:43:13.035906] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:42.507 [2024-12-03 10:43:13.036218] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:45.055 [2024-12-03 10:43:15.638865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.055 [2024-12-03 10:43:15.639133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:45.055 [2024-12-03 10:43:15.639235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2602.942 ms 00:15:45.055 [2024-12-03 10:43:15.639266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.055 [2024-12-03 10:43:15.639521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.055 [2024-12-03 10:43:15.639556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:45.055 [2024-12-03 10:43:15.639781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:15:45.055 [2024-12-03 10:43:15.639812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.055 [2024-12-03 10:43:15.664051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.055 [2024-12-03 10:43:15.664092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:45.055 [2024-12-03 10:43:15.664123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.185 ms 00:15:45.055 [2024-12-03 10:43:15.664133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.686571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.686600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:45.315 [2024-12-03 10:43:15.686616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.374 ms 00:15:45.315 [2024-12-03 10:43:15.686623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.686971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.686989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:45.315 [2024-12-03 10:43:15.687000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:15:45.315 [2024-12-03 10:43:15.687009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.750433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.750464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:45.315 [2024-12-03 10:43:15.750477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.389 ms 00:15:45.315 [2024-12-03 10:43:15.750485] mngt/ftl_mngt.c: 
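# The first-startup scrub above zeroes the entire 4 GiB NV cache data region
# before it can be trusted; at the 2602.942 ms it reports, that works out to
# roughly 1.5 GiB/s of sequential writes to the cache device:
awk 'BEGIN { printf "%.0f MiB/s\n", 4096 / 2.602942 }'   # prints 1574 MiB/s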
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.775111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.775241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:45.315 [2024-12-03 10:43:15.775262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.553 ms 00:15:45.315 [2024-12-03 10:43:15.775270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.779598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.779646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:45.315 [2024-12-03 10:43:15.779660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.270 ms 00:15:45.315 [2024-12-03 10:43:15.779667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.802957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.803105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:45.315 [2024-12-03 10:43:15.803124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.230 ms 00:15:45.315 [2024-12-03 10:43:15.803131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.803270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.803287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:45.315 [2024-12-03 10:43:15.803299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:45.315 [2024-12-03 10:43:15.803306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.803395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.315 [2024-12-03 10:43:15.803419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:45.315 [2024-12-03 10:43:15.803429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:45.315 [2024-12-03 10:43:15.803436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.315 [2024-12-03 10:43:15.804306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:45.315 { 00:15:45.315 "name": "ftl0", 00:15:45.315 "uuid": "d105f200-e666-43cc-861e-bc4bc7b990de" 00:15:45.315 } 00:15:45.315 [2024-12-03 10:43:15.807415] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2953.298 ms, result 0 00:15:45.315 [2024-12-03 10:43:15.808285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:45.315 10:43:15 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:45.315 10:43:15 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:45.315 10:43:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:45.315 10:43:15 -- common/autotest_common.sh@899 -- # local i 00:15:45.315 10:43:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:45.315 10:43:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:45.315 10:43:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:45.576 10:43:16 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:45.576 [ 00:15:45.576 { 00:15:45.576 "name": "ftl0", 00:15:45.576 "aliases": [ 00:15:45.576 "d105f200-e666-43cc-861e-bc4bc7b990de" 00:15:45.576 ], 00:15:45.576 "product_name": "FTL disk", 00:15:45.576 "block_size": 4096, 00:15:45.576 "num_blocks": 23592960, 00:15:45.576 "uuid": "d105f200-e666-43cc-861e-bc4bc7b990de", 00:15:45.576 "assigned_rate_limits": { 00:15:45.576 "rw_ios_per_sec": 0, 00:15:45.576 "rw_mbytes_per_sec": 0, 00:15:45.576 "r_mbytes_per_sec": 0, 00:15:45.576 "w_mbytes_per_sec": 0 00:15:45.576 }, 00:15:45.576 "claimed": false, 00:15:45.576 "zoned": false, 00:15:45.576 "supported_io_types": { 00:15:45.576 "read": true, 00:15:45.576 "write": true, 00:15:45.576 "unmap": true, 00:15:45.576 "write_zeroes": true, 00:15:45.576 "flush": true, 00:15:45.576 "reset": false, 00:15:45.576 "compare": false, 00:15:45.576 "compare_and_write": false, 00:15:45.576 "abort": false, 00:15:45.576 "nvme_admin": false, 00:15:45.576 "nvme_io": false 00:15:45.576 }, 00:15:45.576 "driver_specific": { 00:15:45.576 "ftl": { 00:15:45.576 "base_bdev": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:45.576 "cache": "nvc0n1p0" 00:15:45.576 } 00:15:45.576 } 00:15:45.576 } 00:15:45.576 ] 00:15:45.835 10:43:16 -- common/autotest_common.sh@905 -- # return 0 00:15:45.835 10:43:16 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:45.835 10:43:16 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:45.835 10:43:16 -- ftl/trim.sh@56 -- # echo ']}' 00:15:45.835 10:43:16 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:46.093 10:43:16 -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:46.093 { 00:15:46.093 "name": "ftl0", 00:15:46.093 "aliases": [ 00:15:46.093 "d105f200-e666-43cc-861e-bc4bc7b990de" 00:15:46.093 ], 00:15:46.093 "product_name": "FTL disk", 00:15:46.093 "block_size": 4096, 00:15:46.093 "num_blocks": 23592960, 00:15:46.093 "uuid": "d105f200-e666-43cc-861e-bc4bc7b990de", 00:15:46.093 "assigned_rate_limits": { 00:15:46.093 "rw_ios_per_sec": 0, 00:15:46.093 "rw_mbytes_per_sec": 0, 00:15:46.093 "r_mbytes_per_sec": 0, 00:15:46.093 "w_mbytes_per_sec": 0 00:15:46.093 }, 00:15:46.093 "claimed": false, 00:15:46.093 "zoned": false, 00:15:46.093 "supported_io_types": { 00:15:46.093 "read": true, 00:15:46.093 "write": true, 00:15:46.093 "unmap": true, 00:15:46.093 "write_zeroes": true, 00:15:46.093 "flush": true, 00:15:46.093 "reset": false, 00:15:46.093 "compare": false, 00:15:46.093 "compare_and_write": false, 00:15:46.093 "abort": false, 00:15:46.093 "nvme_admin": false, 00:15:46.093 "nvme_io": false 00:15:46.093 }, 00:15:46.093 "driver_specific": { 00:15:46.093 "ftl": { 00:15:46.093 "base_bdev": "59d596ab-36e9-4eb0-acde-a44161d4e67c", 00:15:46.093 "cache": "nvc0n1p0" 00:15:46.093 } 00:15:46.093 } 00:15:46.093 } 00:15:46.093 ]' 00:15:46.093 10:43:16 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:46.093 10:43:16 -- ftl/trim.sh@60 -- # nb=23592960 00:15:46.093 10:43:16 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:46.353 [2024-12-03 10:43:16.771444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.771478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:46.353 [2024-12-03 10:43:16.771487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:46.353 [2024-12-03 10:43:16.771495] 
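# trim.sh@54-56 above assembles the ftl0 JSON configuration by wrapping the
# bdev subsystem dump in a {"subsystems": [...]} object; the redirect into
# $FTL_JSON_CONF is implied by the export at the top of this test rather than
# visible in the trace. The same composition written out:
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
{
    echo '{"subsystems": ['
    "$rpc_py" save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json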
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.771527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:46.353 [2024-12-03 10:43:16.773739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.773763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:46.353 [2024-12-03 10:43:16.773776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:15:46.353 [2024-12-03 10:43:16.773782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.774255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.774267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:46.353 [2024-12-03 10:43:16.774278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:15:46.353 [2024-12-03 10:43:16.774285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.777038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.777063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:46.353 [2024-12-03 10:43:16.777074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:15:46.353 [2024-12-03 10:43:16.777081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.782266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.782376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:46.353 [2024-12-03 10:43:16.782392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.143 ms 00:15:46.353 [2024-12-03 10:43:16.782398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.800102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.800127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:46.353 [2024-12-03 10:43:16.800137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.618 ms 00:15:46.353 [2024-12-03 10:43:16.800144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.812826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.812852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:46.353 [2024-12-03 10:43:16.812863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.630 ms 00:15:46.353 [2024-12-03 10:43:16.812869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.813033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.813041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:46.353 [2024-12-03 10:43:16.813069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:15:46.353 [2024-12-03 10:43:16.813076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.831044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.831074] mngt/ftl_mngt.c: 407:trace_step: 
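# Most of the unload time above goes to metadata persistence; summing just the
# durations logged in this stretch (Persist L2P through Persist P2L metadata):
awk 'BEGIN { print 2.727 + 5.143 + 17.618 + 12.630 + 0.115, "ms" }'  # 38.233 ms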
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:46.353 [2024-12-03 10:43:16.831084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.941 ms 00:15:46.353 [2024-12-03 10:43:16.831090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.848982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.849005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:46.353 [2024-12-03 10:43:16.849014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.831 ms 00:15:46.353 [2024-12-03 10:43:16.849020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.866303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.866327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:46.353 [2024-12-03 10:43:16.866337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.214 ms 00:15:46.353 [2024-12-03 10:43:16.866343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.883563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.353 [2024-12-03 10:43:16.883586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:46.353 [2024-12-03 10:43:16.883599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.124 ms 00:15:46.353 [2024-12-03 10:43:16.883616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.353 [2024-12-03 10:43:16.883664] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:46.353 [2024-12-03 10:43:16.883677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:46.353 [2024-12-03 10:43:16.883760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13 through Band 87: 0 / 261120 wr_cnt: 0 state: free (75 identical ftl_debug.c: 167:ftl_dev_dump_bands lines collapsed, timestamps 10:43:16.883769 through 10:43:16.884288; every band reports 0 of 261120 valid blocks, wr_cnt 0, state free)
00:15:46.354 [2024-12-03 10:43:16.884294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:46.354 [2024-12-03 10:43:16.884389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:46.354 [2024-12-03 10:43:16.884397] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d105f200-e666-43cc-861e-bc4bc7b990de 00:15:46.354 [2024-12-03 10:43:16.884403] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:46.354 [2024-12-03 10:43:16.884410] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:46.354 [2024-12-03 10:43:16.884415] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:46.354 [2024-12-03 10:43:16.884422] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:46.354 [2024-12-03 10:43:16.884428] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:46.354 [2024-12-03 10:43:16.884435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:46.354 [2024-12-03 10:43:16.884441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:46.354 [2024-12-03 10:43:16.884449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:46.354 [2024-12-03 10:43:16.884454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:46.355 [2024-12-03 10:43:16.884460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.355 [2024-12-03 10:43:16.884467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:46.355 [2024-12-03 10:43:16.884475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:15:46.355 [2024-12-03 10:43:16.884480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:46.355 [2024-12-03 10:43:16.894744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.355 [2024-12-03 10:43:16.894770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:46.355 [2024-12-03 10:43:16.894780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.233 ms 00:15:46.355 [2024-12-03 10:43:16.894786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.355 [2024-12-03 10:43:16.894973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.355 [2024-12-03 10:43:16.894981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:46.355 [2024-12-03 10:43:16.894990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:46.355 [2024-12-03 10:43:16.894995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.355 [2024-12-03 10:43:16.931882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.355 [2024-12-03 10:43:16.931909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:46.355 [2024-12-03 10:43:16.931921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.355 [2024-12-03 10:43:16.931927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.355 [2024-12-03 10:43:16.932012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.355 [2024-12-03 10:43:16.932019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:46.355 [2024-12-03 10:43:16.932027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.355 [2024-12-03 10:43:16.932033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.355 [2024-12-03 10:43:16.932101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.355 [2024-12-03 10:43:16.932109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:46.355 [2024-12-03 10:43:16.932118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.355 [2024-12-03 10:43:16.932124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.355 [2024-12-03 10:43:16.932153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.355 [2024-12-03 10:43:16.932161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:46.355 [2024-12-03 10:43:16.932169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.355 [2024-12-03 10:43:16.932175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.614 [2024-12-03 10:43:17.001434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.614 [2024-12-03 10:43:17.001468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:46.614 [2024-12-03 10:43:17.001483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.614 [2024-12-03 10:43:17.001490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.614 [2024-12-03 10:43:17.025243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.614 [2024-12-03 10:43:17.025270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:46.614 [2024-12-03 10:43:17.025281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.614 
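[Editor's note] Each management step in this log is traced by mngt/ftl_mngt.c as a four-line group: an Action (or Rollback) marker, the step name, its duration, and a status code. The per-step durations should add up to something close to the total that finish_msg reports for the whole process ("FTL shutdown", 254.489 ms, a few steps below) — only close, because wall-clock time also passes between steps (the NV cache rollback above reports 0.000 ms while the timestamps around it jump by roughly 70 ms). A minimal bash/awk sketch of that cross-check, assuming the console log has been saved as build.log (the filename and the field positions are assumptions read off the lines above):

  # Sum the per-step durations from trace_step lines; compare the total
  # against the "Management process finished ... duration = X ms" summary.
  awk '
    /trace_step/ && /duration:/ {
      for (i = 1; i <= NF; i++)
        if ($i == "duration:") total += $(i + 1)   # the number before "ms"
    }
    END { printf "sum of step durations: %.3f ms\n", total }
  ' build.log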
[2024-12-03 10:43:17.025287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.614 [2024-12-03 10:43:17.025348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.614 [2024-12-03 10:43:17.025356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:46.614 [2024-12-03 10:43:17.025364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.614 [2024-12-03 10:43:17.025369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.614 [2024-12-03 10:43:17.025416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.614 [2024-12-03 10:43:17.025423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:46.614 [2024-12-03 10:43:17.025433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.614 [2024-12-03 10:43:17.025452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.614 [2024-12-03 10:43:17.025550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.614 [2024-12-03 10:43:17.025557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:46.614 [2024-12-03 10:43:17.025569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.614 [2024-12-03 10:43:17.025575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.614 [2024-12-03 10:43:17.025621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.614 [2024-12-03 10:43:17.025628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:46.614 [2024-12-03 10:43:17.025638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.614 [2024-12-03 10:43:17.025643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.614 [2024-12-03 10:43:17.025689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.614 [2024-12-03 10:43:17.025696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:46.614 [2024-12-03 10:43:17.025704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.615 [2024-12-03 10:43:17.025711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.615 [2024-12-03 10:43:17.025763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:46.615 [2024-12-03 10:43:17.025770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:46.615 [2024-12-03 10:43:17.025780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:46.615 [2024-12-03 10:43:17.025786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.615 [2024-12-03 10:43:17.025961] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 254.489 ms, result 0 00:15:46.615 true 00:15:46.615 10:43:17 -- ftl/trim.sh@63 -- # killprocess 71950 00:15:46.615 10:43:17 -- common/autotest_common.sh@936 -- # '[' -z 71950 ']' 00:15:46.615 10:43:17 -- common/autotest_common.sh@940 -- # kill -0 71950 00:15:46.615 10:43:17 -- common/autotest_common.sh@941 -- # uname 00:15:46.615 10:43:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.615 10:43:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71950 00:15:46.615 10:43:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:46.615 
10:43:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:46.615 10:43:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71950' 00:15:46.615 killing process with pid 71950 00:15:46.615 10:43:17 -- common/autotest_common.sh@955 -- # kill 71950 00:15:46.615 10:43:17 -- common/autotest_common.sh@960 -- # wait 71950 00:15:51.904 10:43:22 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:15:52.472 65536+0 records in 00:15:52.472 65536+0 records out 00:15:52.472 268435456 bytes (268 MB, 256 MiB) copied, 0.801846 s, 335 MB/s 00:15:52.472 10:43:23 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:52.730 [2024-12-03 10:43:23.097882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:52.730 [2024-12-03 10:43:23.098149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72135 ] 00:15:52.730 [2024-12-03 10:43:23.237893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.989 [2024-12-03 10:43:23.409092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.248 [2024-12-03 10:43:23.634727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:53.248 [2024-12-03 10:43:23.634776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:53.248 [2024-12-03 10:43:23.782516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.248 [2024-12-03 10:43:23.782552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:53.248 [2024-12-03 10:43:23.782564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:53.248 [2024-12-03 10:43:23.782570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.248 [2024-12-03 10:43:23.784750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.248 [2024-12-03 10:43:23.784777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:53.248 [2024-12-03 10:43:23.784785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:15:53.248 [2024-12-03 10:43:23.784791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.248 [2024-12-03 10:43:23.784850] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:53.248 [2024-12-03 10:43:23.785437] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:53.248 [2024-12-03 10:43:23.785449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.248 [2024-12-03 10:43:23.785455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:53.248 [2024-12-03 10:43:23.785462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:15:53.248 [2024-12-03 10:43:23.785468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.248 [2024-12-03 10:43:23.786766] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:53.248 [2024-12-03 10:43:23.797387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
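[Editor's note] Two things worth unpacking from the block above. First, the dd numbers are self-consistent: 65536 blocks × 4 KiB = 268435456 bytes (exactly 256 MiB), and 268435456 B / 0.801846 s ≈ 335 MB/s, matching dd's own report; that 256 MiB random_pattern file is then replayed into the ftl0 bdev by spdk_dd (the --if/--ob flags above), which is what the "Copying: X/256 [MB]" progress lines further down track. Second, the xtrace lines from common/autotest_common.sh show the shape of the killprocess helper: require a PID, probe it with kill -0, resolve the process name with ps, check it is not a sudo wrapper, then kill and wait. A reconstruction of that visible logic as a standalone bash sketch — the real upstream body may differ, and what the function does when the name *is* sudo is not visible in this trace:

  killprocess() {
      # mirror the "[ -z 71950 ]" guard: a PID argument is required
      [ -z "$1" ] && return 1
      # kill -0 sends no signal; it only checks that the process exists
      kill -0 "$1" 2> /dev/null || return 1
      # the trace resolves the process name before killing it
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$1")
      fi
      # the "= sudo" test above guards against signalling a sudo wrapper
      if [ "$process_name" != sudo ]; then
          echo "killing process with pid $1"
          kill "$1"
          wait "$1"   # reap the child so the script sees its exit status
      fi
  }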
00:15:53.248 [2024-12-03 10:43:23.797410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:53.248 [2024-12-03 10:43:23.797419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.623 ms 00:15:53.248 [2024-12-03 10:43:23.797425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.797497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.797506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:53.249 [2024-12-03 10:43:23.797513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:53.249 [2024-12-03 10:43:23.797519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.803743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.803762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:53.249 [2024-12-03 10:43:23.803770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.192 ms 00:15:53.249 [2024-12-03 10:43:23.803779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.803855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.803863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:53.249 [2024-12-03 10:43:23.803870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:53.249 [2024-12-03 10:43:23.803876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.803895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.803900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:53.249 [2024-12-03 10:43:23.803906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:53.249 [2024-12-03 10:43:23.803912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.803938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:53.249 [2024-12-03 10:43:23.807038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.807066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:53.249 [2024-12-03 10:43:23.807077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:15:53.249 [2024-12-03 10:43:23.807085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.807121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.807128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:53.249 [2024-12-03 10:43:23.807135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:53.249 [2024-12-03 10:43:23.807141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.807155] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:53.249 [2024-12-03 10:43:23.807172] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:53.249 [2024-12-03 10:43:23.807199] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:53.249 [2024-12-03 10:43:23.807213] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:53.249 [2024-12-03 10:43:23.807272] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:53.249 [2024-12-03 10:43:23.807281] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:53.249 [2024-12-03 10:43:23.807289] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:53.249 [2024-12-03 10:43:23.807296] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807304] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807311] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:53.249 [2024-12-03 10:43:23.807317] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:53.249 [2024-12-03 10:43:23.807322] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:53.249 [2024-12-03 10:43:23.807330] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:53.249 [2024-12-03 10:43:23.807336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.807342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:53.249 [2024-12-03 10:43:23.807348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:15:53.249 [2024-12-03 10:43:23.807354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.807408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.249 [2024-12-03 10:43:23.807415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:53.249 [2024-12-03 10:43:23.807426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:53.249 [2024-12-03 10:43:23.807432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.249 [2024-12-03 10:43:23.807508] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:53.249 [2024-12-03 10:43:23.807517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:53.249 [2024-12-03 10:43:23.807528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807540] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:53.249 [2024-12-03 10:43:23.807551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:53.249 [2024-12-03 10:43:23.807573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:53.249 [2024-12-03 10:43:23.807588] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:53.249 [2024-12-03 10:43:23.807612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:53.249 [2024-12-03 10:43:23.807618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:53.249 [2024-12-03 10:43:23.807629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:53.249 [2024-12-03 10:43:23.807644] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:53.249 [2024-12-03 10:43:23.807652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:53.249 [2024-12-03 10:43:23.807663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:53.249 [2024-12-03 10:43:23.807668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:53.249 [2024-12-03 10:43:23.807679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:53.249 [2024-12-03 10:43:23.807685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:53.249 [2024-12-03 10:43:23.807695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:53.249 [2024-12-03 10:43:23.807711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:53.249 [2024-12-03 10:43:23.807726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:53.249 [2024-12-03 10:43:23.807742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:53.249 [2024-12-03 10:43:23.807756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:53.249 [2024-12-03 10:43:23.807767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:53.249 [2024-12-03 10:43:23.807772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:53.249 [2024-12-03 10:43:23.807777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:53.249 [2024-12-03 10:43:23.807781] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:53.249 [2024-12-03 10:43:23.807787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:53.249 
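[Editor's note] ftl_layout.c prints each region of the layout as a three-line group — name, offset, and size, all in MiB — so the NV cache map above reads: superblock and metadata regions in the first ~108 MiB, four 4 MiB P2L checkpoint regions between 91 and 107 MiB, and data_nvc contributing a further 4096 MiB of the 5171 MiB cache device. Going by the names, the *_mirror regions are second copies of their metadata. A bash/awk sketch that folds the triplets back into one row per region (assumes the console log is saved one notice per line as build.log; both are assumptions):

  # Rebuild "name / offset / size" rows from ftl_layout.c dump_region output.
  awk '
    /dump_region/ && /Region /  { name = $NF }
    /dump_region/ && /offset:/  { off  = $(NF - 1) }
    /dump_region/ && /blocks:/  { printf "%-16s offset %10s MiB  size %10s MiB\n", name, off, $(NF - 1) }
  ' build.log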
[2024-12-03 10:43:23.807793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.249 [2024-12-03 10:43:23.807809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:53.249 [2024-12-03 10:43:23.807814] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:53.249 [2024-12-03 10:43:23.807819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:53.249 [2024-12-03 10:43:23.807825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:53.249 [2024-12-03 10:43:23.807830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:53.249 [2024-12-03 10:43:23.807835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:53.249 [2024-12-03 10:43:23.807841] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:53.249 [2024-12-03 10:43:23.807848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:53.249 [2024-12-03 10:43:23.807855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:53.249 [2024-12-03 10:43:23.807861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:53.249 [2024-12-03 10:43:23.807866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:53.249 [2024-12-03 10:43:23.807872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:53.250 [2024-12-03 10:43:23.807878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:53.250 [2024-12-03 10:43:23.807883] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:53.250 [2024-12-03 10:43:23.807889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:53.250 [2024-12-03 10:43:23.807894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:53.250 [2024-12-03 10:43:23.807899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:53.250 [2024-12-03 10:43:23.807905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:53.250 [2024-12-03 10:43:23.807910] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:53.250 [2024-12-03 10:43:23.807916] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:53.250 [2024-12-03 10:43:23.807922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:53.250 [2024-12-03 10:43:23.807927] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:53.250 [2024-12-03 10:43:23.807937] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:53.250 [2024-12-03 10:43:23.807943] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:53.250 [2024-12-03 10:43:23.807949] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:53.250 [2024-12-03 10:43:23.807954] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:53.250 [2024-12-03 10:43:23.807962] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:53.250 [2024-12-03 10:43:23.807968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.250 [2024-12-03 10:43:23.807973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:53.250 [2024-12-03 10:43:23.807979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:15:53.250 [2024-12-03 10:43:23.807984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.250 [2024-12-03 10:43:23.821776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.250 [2024-12-03 10:43:23.821800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:53.250 [2024-12-03 10:43:23.821809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.750 ms 00:15:53.250 [2024-12-03 10:43:23.821816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.250 [2024-12-03 10:43:23.821907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.250 [2024-12-03 10:43:23.821915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:53.250 [2024-12-03 10:43:23.821922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:53.250 [2024-12-03 10:43:23.821929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.858988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.859017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:53.509 [2024-12-03 10:43:23.859027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.042 ms 00:15:53.509 [2024-12-03 10:43:23.859035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.859102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.859111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:53.509 [2024-12-03 10:43:23.859121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:53.509 [2024-12-03 10:43:23.859127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.859513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.859527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:53.509 [2024-12-03 10:43:23.859533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.370 ms 00:15:53.509 [2024-12-03 10:43:23.859540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.859649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.859657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:53.509 [2024-12-03 10:43:23.859664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:53.509 [2024-12-03 10:43:23.859670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.872647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.872668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:53.509 [2024-12-03 10:43:23.872676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.959 ms 00:15:53.509 [2024-12-03 10:43:23.872684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.883449] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:15:53.509 [2024-12-03 10:43:23.883481] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:53.509 [2024-12-03 10:43:23.883491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.883498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:53.509 [2024-12-03 10:43:23.883506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.732 ms 00:15:53.509 [2024-12-03 10:43:23.883512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.902368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.902393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:53.509 [2024-12-03 10:43:23.902406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.755 ms 00:15:53.509 [2024-12-03 10:43:23.902413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.911665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.911692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:53.509 [2024-12-03 10:43:23.911702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.199 ms 00:15:53.509 [2024-12-03 10:43:23.911715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.920808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.920831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:53.509 [2024-12-03 10:43:23.920839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.044 ms 00:15:53.509 [2024-12-03 10:43:23.920845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.921207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.921218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:53.509 [2024-12-03 10:43:23.921226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:15:53.509 [2024-12-03 
10:43:23.921231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.969958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.969984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:53.509 [2024-12-03 10:43:23.969994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.708 ms 00:15:53.509 [2024-12-03 10:43:23.970000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.978043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:53.509 [2024-12-03 10:43:23.992654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.992679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:53.509 [2024-12-03 10:43:23.992689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.584 ms 00:15:53.509 [2024-12-03 10:43:23.992696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.992757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.992765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:53.509 [2024-12-03 10:43:23.992772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:53.509 [2024-12-03 10:43:23.992781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.992821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.992830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:53.509 [2024-12-03 10:43:23.992836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:53.509 [2024-12-03 10:43:23.992842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.993888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.509 [2024-12-03 10:43:23.993912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:53.509 [2024-12-03 10:43:23.993919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:15:53.509 [2024-12-03 10:43:23.993925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.509 [2024-12-03 10:43:23.993953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.510 [2024-12-03 10:43:23.993960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:53.510 [2024-12-03 10:43:23.993969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:53.510 [2024-12-03 10:43:23.993975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.510 [2024-12-03 10:43:23.994006] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:53.510 [2024-12-03 10:43:23.994014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.510 [2024-12-03 10:43:23.994021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:53.510 [2024-12-03 10:43:23.994028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:53.510 [2024-12-03 10:43:23.994034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.510 [2024-12-03 10:43:24.013306] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.510 [2024-12-03 10:43:24.013334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:53.510 [2024-12-03 10:43:24.013343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.253 ms 00:15:53.510 [2024-12-03 10:43:24.013349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.510 [2024-12-03 10:43:24.013417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.510 [2024-12-03 10:43:24.013425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:53.510 [2024-12-03 10:43:24.013433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:53.510 [2024-12-03 10:43:24.013439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.510 [2024-12-03 10:43:24.014355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:53.510 [2024-12-03 10:43:24.016805] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 231.554 ms, result 0 00:15:53.510 [2024-12-03 10:43:24.017723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:53.510 [2024-12-03 10:43:24.028821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:54.444  [2024-12-03T10:43:26.433Z] Copying: 14/256 [MB] (14 MBps) [2024-12-03T10:43:27.373Z] Copying: 26/256 [MB] (11 MBps) [2024-12-03T10:43:28.311Z] Copying: 40/256 [MB] (14 MBps) [2024-12-03T10:43:29.251Z] Copying: 67/256 [MB] (26 MBps) [2024-12-03T10:43:30.186Z] Copying: 84/256 [MB] (17 MBps) [2024-12-03T10:43:31.119Z] Copying: 102/256 [MB] (17 MBps) [2024-12-03T10:43:32.056Z] Copying: 123/256 [MB] (20 MBps) [2024-12-03T10:43:33.438Z] Copying: 143/256 [MB] (20 MBps) [2024-12-03T10:43:34.381Z] Copying: 162/256 [MB] (18 MBps) [2024-12-03T10:43:35.322Z] Copying: 174/256 [MB] (12 MBps) [2024-12-03T10:43:36.262Z] Copying: 186/256 [MB] (11 MBps) [2024-12-03T10:43:37.212Z] Copying: 198/256 [MB] (11 MBps) [2024-12-03T10:43:38.147Z] Copying: 210/256 [MB] (11 MBps) [2024-12-03T10:43:39.087Z] Copying: 223/256 [MB] (12 MBps) [2024-12-03T10:43:40.465Z] Copying: 234/256 [MB] (11 MBps) [2024-12-03T10:43:41.032Z] Copying: 245/256 [MB] (10 MBps) [2024-12-03T10:43:41.032Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-03 10:43:41.010353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:10.419 [2024-12-03 10:43:41.017982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.419 [2024-12-03 10:43:41.018010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:10.419 [2024-12-03 10:43:41.018030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:10.419 [2024-12-03 10:43:41.018036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.419 [2024-12-03 10:43:41.018063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:10.419 [2024-12-03 10:43:41.020296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.419 [2024-12-03 10:43:41.020318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:10.419 [2024-12-03 10:43:41.020327] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:16:10.419 [2024-12-03 10:43:41.020334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.419 [2024-12-03 10:43:41.022870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.419 [2024-12-03 10:43:41.022893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:10.419 [2024-12-03 10:43:41.022901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms 00:16:10.419 [2024-12-03 10:43:41.022907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.419 [2024-12-03 10:43:41.029246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.419 [2024-12-03 10:43:41.029267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:10.419 [2024-12-03 10:43:41.029275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.321 ms 00:16:10.419 [2024-12-03 10:43:41.029280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.034490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.034509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:10.712 [2024-12-03 10:43:41.034518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.185 ms 00:16:10.712 [2024-12-03 10:43:41.034524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.052897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.052920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:10.712 [2024-12-03 10:43:41.052929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.323 ms 00:16:10.712 [2024-12-03 10:43:41.052935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.064840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.064864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:10.712 [2024-12-03 10:43:41.064873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.869 ms 00:16:10.712 [2024-12-03 10:43:41.064880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.064983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.064991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:10.712 [2024-12-03 10:43:41.064998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:10.712 [2024-12-03 10:43:41.065004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.083781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.083803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:10.712 [2024-12-03 10:43:41.083810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.765 ms 00:16:10.712 [2024-12-03 10:43:41.083816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.102147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.102168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:10.712 
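[Editor's note] The "persist trim metadata" step above is followed only by the superblock persist before the sequence flips the device back to clean ("Set FTL clean state", just below), undoing the "Set FTL dirty state" step from startup. That bracketing reads like standard crash detection: presumably, a device loaded while still marked dirty did not complete its shutdown persists and needs recovery on the next load. The pair is easy to spot in a saved log (same build.log assumption as above):

  # Show the dirty/clean transitions that bracket each FTL run.
  grep -En 'Set FTL (dirty|clean) state' build.log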
[2024-12-03 10:43:41.102175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.298 ms 00:16:10.712 [2024-12-03 10:43:41.102181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.119968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.119988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:10.712 [2024-12-03 10:43:41.119995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.754 ms 00:16:10.712 [2024-12-03 10:43:41.120000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.137449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.712 [2024-12-03 10:43:41.137470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:10.712 [2024-12-03 10:43:41.137478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.396 ms 00:16:10.712 [2024-12-03 10:43:41.137484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.712 [2024-12-03 10:43:41.137516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:10.712 [2024-12-03 10:43:41.137528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:10.712 [2024-12-03 10:43:41.137536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:10.712 [2024-12-03 10:43:41.137542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:10.712 [2024-12-03 10:43:41.137547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:10.712 [2024-12-03 10:43:41.137553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:10.712 [2024-12-03 10:43:41.137559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:10.712 [2024-12-03 10:43:41.137564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137620] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 
[2024-12-03 10:43:41.137764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:16:10.713 [2024-12-03 10:43:41.137911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.137996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:10.713 [2024-12-03 10:43:41.138090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:10.714 [2024-12-03 10:43:41.138096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:10.714 [2024-12-03 10:43:41.138107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:10.714 [2024-12-03 10:43:41.138113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:10.714 [2024-12-03 10:43:41.138119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:10.714 [2024-12-03 10:43:41.138131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:10.714 [2024-12-03 10:43:41.138137] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d105f200-e666-43cc-861e-bc4bc7b990de 00:16:10.714 [2024-12-03 10:43:41.138143] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:10.714 [2024-12-03 10:43:41.138150] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:10.714 [2024-12-03 10:43:41.138156] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:10.714 [2024-12-03 10:43:41.138162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:10.714 [2024-12-03 10:43:41.138168] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:10.714 [2024-12-03 10:43:41.138174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:10.714 [2024-12-03 10:43:41.138182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:10.714 [2024-12-03 10:43:41.138186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:10.714 [2024-12-03 10:43:41.138191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:10.714 [2024-12-03 10:43:41.138196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.714 [2024-12-03 10:43:41.138203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:10.714 [2024-12-03 10:43:41.138210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:16:10.714 [2024-12-03 10:43:41.138216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.148032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.714 [2024-12-03 10:43:41.148051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:10.714 [2024-12-03 10:43:41.148066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.804 ms 00:16:10.714 [2024-12-03 10:43:41.148076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.148248] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:10.714 [2024-12-03 10:43:41.148256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:10.714 [2024-12-03 10:43:41.148262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:16:10.714 [2024-12-03 10:43:41.148267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.179565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.179594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:10.714 [2024-12-03 10:43:41.179602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.179612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.179680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.179686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.714 [2024-12-03 10:43:41.179693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.179698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.179733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.179740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.714 [2024-12-03 10:43:41.179746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.179751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.179768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.179774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:10.714 [2024-12-03 10:43:41.179779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.179785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.239984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.240014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:10.714 [2024-12-03 10:43:41.240023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.240032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.263683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.263706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:10.714 [2024-12-03 10:43:41.263713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.263720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.263763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.263770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:10.714 [2024-12-03 10:43:41.263776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.263782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:10.714 [2024-12-03 10:43:41.263807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.263817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:10.714 [2024-12-03 10:43:41.263823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.263829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.263904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.263912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:10.714 [2024-12-03 10:43:41.263919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.263925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.263951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.263960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:10.714 [2024-12-03 10:43:41.263966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.263972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.264007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.264013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:10.714 [2024-12-03 10:43:41.264020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.264025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.264080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.714 [2024-12-03 10:43:41.264090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:10.714 [2024-12-03 10:43:41.264101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.714 [2024-12-03 10:43:41.264107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.714 [2024-12-03 10:43:41.264229] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 246.232 ms, result 0 00:16:11.686 00:16:11.686 00:16:11.686 10:43:41 -- ftl/trim.sh@72 -- # svcpid=72334 00:16:11.686 10:43:41 -- ftl/trim.sh@73 -- # waitforlisten 72334 00:16:11.686 10:43:41 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:11.686 10:43:41 -- common/autotest_common.sh@829 -- # '[' -z 72334 ']' 00:16:11.686 10:43:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.686 10:43:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.686 10:43:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.686 10:43:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.686 10:43:41 -- common/autotest_common.sh@10 -- # set +x 00:16:11.686 [2024-12-03 10:43:42.068517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
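At this point trim.sh@71-73 has launched a fresh spdk_tgt with FTL init tracing (-L ftl_init) and is waiting for it to answer on the default RPC socket before issuing any bdev_ftl RPCs. A minimal sketch of the equivalent manual sequence, using only the binary and socket paths visible in the log (the rpc_get_methods polling loop is an assumption about what waitforlisten amounts to, not its actual implementation):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # Poll the UNIX domain RPC socket until the target is ready to serve requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done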
00:16:11.686 [2024-12-03 10:43:42.068627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72334 ] 00:16:11.686 [2024-12-03 10:43:42.212657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.944 [2024-12-03 10:43:42.373153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.944 [2024-12-03 10:43:42.373335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.511 10:43:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.511 10:43:42 -- common/autotest_common.sh@862 -- # return 0 00:16:12.511 10:43:42 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:12.511 [2024-12-03 10:43:43.073704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:12.511 [2024-12-03 10:43:43.073755] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:12.772 [2024-12-03 10:43:43.238847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.238884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:12.772 [2024-12-03 10:43:43.238897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:12.772 [2024-12-03 10:43:43.238903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.241091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.241118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:12.772 [2024-12-03 10:43:43.241127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:16:12.772 [2024-12-03 10:43:43.241133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.241194] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:12.772 [2024-12-03 10:43:43.241831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:12.772 [2024-12-03 10:43:43.241856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.241863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:12.772 [2024-12-03 10:43:43.241871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:16:12.772 [2024-12-03 10:43:43.241877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.243158] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:12.772 [2024-12-03 10:43:43.253918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.253947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:12.772 [2024-12-03 10:43:43.253957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.765 ms 00:16:12.772 [2024-12-03 10:43:43.253965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.254029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.254040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:12.772 [2024-12-03 10:43:43.254047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:12.772 [2024-12-03 10:43:43.254062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.260239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.260267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:12.772 [2024-12-03 10:43:43.260274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:16:12.772 [2024-12-03 10:43:43.260282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.260348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.260357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:12.772 [2024-12-03 10:43:43.260363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:12.772 [2024-12-03 10:43:43.260371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.260392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.260401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:12.772 [2024-12-03 10:43:43.260407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:12.772 [2024-12-03 10:43:43.260416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.260439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:12.772 [2024-12-03 10:43:43.263635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.263657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:12.772 [2024-12-03 10:43:43.263666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:16:12.772 [2024-12-03 10:43:43.263672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.263705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.263712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:12.772 [2024-12-03 10:43:43.263719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:12.772 [2024-12-03 10:43:43.263727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.263745] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:12.772 [2024-12-03 10:43:43.263762] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:12.772 [2024-12-03 10:43:43.263792] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:12.772 [2024-12-03 10:43:43.263807] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:12.772 [2024-12-03 10:43:43.263869] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:12.772 [2024-12-03 10:43:43.263877] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:12.772 [2024-12-03 10:43:43.263891] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:12.772 [2024-12-03 10:43:43.263899] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:12.772 [2024-12-03 10:43:43.263908] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:12.772 [2024-12-03 10:43:43.263915] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:12.772 [2024-12-03 10:43:43.263922] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:12.772 [2024-12-03 10:43:43.263928] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:12.772 [2024-12-03 10:43:43.263936] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:12.772 [2024-12-03 10:43:43.263942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.263949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:12.772 [2024-12-03 10:43:43.263955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:16:12.772 [2024-12-03 10:43:43.263962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.264015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-12-03 10:43:43.264022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:12.772 [2024-12-03 10:43:43.264028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:12.772 [2024-12-03 10:43:43.264034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-12-03 10:43:43.264113] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:12.772 [2024-12-03 10:43:43.264127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:12.772 [2024-12-03 10:43:43.264134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.772 [2024-12-03 10:43:43.264142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.772 [2024-12-03 10:43:43.264148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:12.772 [2024-12-03 10:43:43.264156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:12.772 [2024-12-03 10:43:43.264162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:12.772 [2024-12-03 10:43:43.264173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:12.772 [2024-12-03 10:43:43.264178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:12.772 [2024-12-03 10:43:43.264186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.772 [2024-12-03 10:43:43.264192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:12.772 [2024-12-03 10:43:43.264199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:12.772 [2024-12-03 10:43:43.264204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.772 [2024-12-03 10:43:43.264211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:12.772 [2024-12-03 10:43:43.264216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:12.772 [2024-12-03 10:43:43.264223] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.772 [2024-12-03 10:43:43.264228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:12.772 [2024-12-03 10:43:43.264235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:12.772 [2024-12-03 10:43:43.264240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.772 [2024-12-03 10:43:43.264247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:12.773 [2024-12-03 10:43:43.264252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:12.773 [2024-12-03 10:43:43.264259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:12.773 [2024-12-03 10:43:43.264264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:12.773 [2024-12-03 10:43:43.264272] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:12.773 [2024-12-03 10:43:43.264277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.773 [2024-12-03 10:43:43.264288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:12.773 [2024-12-03 10:43:43.264294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:12.773 [2024-12-03 10:43:43.264300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.773 [2024-12-03 10:43:43.264305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:12.773 [2024-12-03 10:43:43.264312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:12.773 [2024-12-03 10:43:43.264316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.773 [2024-12-03 10:43:43.264324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:12.773 [2024-12-03 10:43:43.264329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:12.773 [2024-12-03 10:43:43.264335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.773 [2024-12-03 10:43:43.264340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:12.773 [2024-12-03 10:43:43.264346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:12.773 [2024-12-03 10:43:43.264351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.773 [2024-12-03 10:43:43.264358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:12.773 [2024-12-03 10:43:43.264364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:12.773 [2024-12-03 10:43:43.264372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.773 [2024-12-03 10:43:43.264377] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:12.773 [2024-12-03 10:43:43.264385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:12.773 [2024-12-03 10:43:43.264391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.773 [2024-12-03 10:43:43.264397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.773 [2024-12-03 10:43:43.264403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:12.773 [2024-12-03 10:43:43.264410] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:12.773 [2024-12-03 10:43:43.264414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
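The startup layout dump above is internally consistent with a 4 KiB FTL block size; the block size itself is never printed, so treat that as an inferred assumption. A quick arithmetic check, with every other number taken straight from the dump:

  echo $(( 23592960 * 4 / 1024 / 1024 ))     # = 90: L2P entries x 4-byte addresses matches "Region l2p ... blocks: 90.00 MiB"
  echo $(( 23592960 * 4096 / 1024 / 1024 ))  # = 92160 MiB, i.e. 90 GiB of addressable user LBAs
  echo $(( 261120 * 4096 / 1024 / 1024 ))    # = 1020 MiB per band; the 100 bands dumped earlier cover ~102000 MiB of the 103424.00 MiB base device, the remainder holding the metadata regions listed here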
00:16:12.773 [2024-12-03 10:43:43.264421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:12.773 [2024-12-03 10:43:43.264426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:12.773 [2024-12-03 10:43:43.264433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:12.773 [2024-12-03 10:43:43.264439] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:12.773 [2024-12-03 10:43:43.264450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.773 [2024-12-03 10:43:43.264456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:12.773 [2024-12-03 10:43:43.264463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:12.773 [2024-12-03 10:43:43.264468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:12.773 [2024-12-03 10:43:43.264477] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:12.773 [2024-12-03 10:43:43.264483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:12.773 [2024-12-03 10:43:43.264491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:12.773 [2024-12-03 10:43:43.264497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:12.773 [2024-12-03 10:43:43.264504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:12.773 [2024-12-03 10:43:43.264509] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:12.773 [2024-12-03 10:43:43.264517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:12.773 [2024-12-03 10:43:43.264522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:12.773 [2024-12-03 10:43:43.264529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:12.773 [2024-12-03 10:43:43.264535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:12.773 [2024-12-03 10:43:43.264541] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:12.773 [2024-12-03 10:43:43.264548] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.773 [2024-12-03 10:43:43.264559] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:12.773 [2024-12-03 10:43:43.264564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:12.773 [2024-12-03 10:43:43.264572] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:12.773 [2024-12-03 10:43:43.264580] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:12.773 [2024-12-03 10:43:43.264589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.264595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:12.773 [2024-12-03 10:43:43.264602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:16:12.773 [2024-12-03 10:43:43.264608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.278384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.278410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:12.773 [2024-12-03 10:43:43.278422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.729 ms 00:16:12.773 [2024-12-03 10:43:43.278432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.278522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.278530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:12.773 [2024-12-03 10:43:43.278539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:12.773 [2024-12-03 10:43:43.278545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.305359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.305385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:12.773 [2024-12-03 10:43:43.305397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.795 ms 00:16:12.773 [2024-12-03 10:43:43.305404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.305451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.305461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:12.773 [2024-12-03 10:43:43.305470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:12.773 [2024-12-03 10:43:43.305476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.305855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.305873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:12.773 [2024-12-03 10:43:43.305883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:16:12.773 [2024-12-03 10:43:43.305889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.305987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.305993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:12.773 [2024-12-03 10:43:43.306003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:12.773 [2024-12-03 10:43:43.306008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.319693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.319716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:12.773 [2024-12-03 10:43:43.319727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.667 ms 00:16:12.773 [2024-12-03 10:43:43.319732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.330452] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:12.773 [2024-12-03 10:43:43.330477] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:12.773 [2024-12-03 10:43:43.330488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.330494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:12.773 [2024-12-03 10:43:43.330503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.674 ms 00:16:12.773 [2024-12-03 10:43:43.330508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.349142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.349166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:12.773 [2024-12-03 10:43:43.349177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.578 ms 00:16:12.773 [2024-12-03 10:43:43.349183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.358572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.358602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:12.773 [2024-12-03 10:43:43.358611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.336 ms 00:16:12.773 [2024-12-03 10:43:43.358616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.773 [2024-12-03 10:43:43.367695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.773 [2024-12-03 10:43:43.367719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:12.773 [2024-12-03 10:43:43.367731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.036 ms 00:16:12.774 [2024-12-03 10:43:43.367737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.774 [2024-12-03 10:43:43.368015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.774 [2024-12-03 10:43:43.368025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:12.774 [2024-12-03 10:43:43.368036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:16:12.774 [2024-12-03 10:43:43.368042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.032 [2024-12-03 10:43:43.416747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.032 [2024-12-03 10:43:43.416776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:13.032 [2024-12-03 10:43:43.416788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.676 ms 00:16:13.032 [2024-12-03 10:43:43.416795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.032 [2024-12-03 
10:43:43.424911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:13.032 [2024-12-03 10:43:43.439193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.032 [2024-12-03 10:43:43.439222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:13.032 [2024-12-03 10:43:43.439232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.339 ms 00:16:13.032 [2024-12-03 10:43:43.439241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.032 [2024-12-03 10:43:43.439297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.032 [2024-12-03 10:43:43.439308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:13.032 [2024-12-03 10:43:43.439315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:13.032 [2024-12-03 10:43:43.439325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.032 [2024-12-03 10:43:43.439369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.032 [2024-12-03 10:43:43.439378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:13.032 [2024-12-03 10:43:43.439384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:13.032 [2024-12-03 10:43:43.439391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.032 [2024-12-03 10:43:43.440443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.032 [2024-12-03 10:43:43.440465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:13.033 [2024-12-03 10:43:43.440472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:16:13.033 [2024-12-03 10:43:43.440480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.033 [2024-12-03 10:43:43.440510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.033 [2024-12-03 10:43:43.440521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:13.033 [2024-12-03 10:43:43.440528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:13.033 [2024-12-03 10:43:43.440535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.033 [2024-12-03 10:43:43.440564] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:13.033 [2024-12-03 10:43:43.440575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.033 [2024-12-03 10:43:43.440581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:13.033 [2024-12-03 10:43:43.440589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:13.033 [2024-12-03 10:43:43.440595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.033 [2024-12-03 10:43:43.459741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.033 [2024-12-03 10:43:43.459767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:13.033 [2024-12-03 10:43:43.459779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.126 ms 00:16:13.033 [2024-12-03 10:43:43.459785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.033 [2024-12-03 10:43:43.459856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.033 [2024-12-03 10:43:43.459864] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:13.033 [2024-12-03 10:43:43.459873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:13.033 [2024-12-03 10:43:43.459881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.033 [2024-12-03 10:43:43.460738] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:13.033 [2024-12-03 10:43:43.463172] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.645 ms, result 0 00:16:13.033 [2024-12-03 10:43:43.464968] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:13.033 Some configs were skipped because the RPC state that can call them passed over. 00:16:13.033 10:43:43 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:13.291 [2024-12-03 10:43:43.690597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.291 [2024-12-03 10:43:43.690630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:13.291 [2024-12-03 10:43:43.690640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.833 ms 00:16:13.291 [2024-12-03 10:43:43.690649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.291 [2024-12-03 10:43:43.690677] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.913 ms, result 0 00:16:13.291 true 00:16:13.291 10:43:43 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:13.291 [2024-12-03 10:43:43.901135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.291 [2024-12-03 10:43:43.901162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:13.291 [2024-12-03 10:43:43.901171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.285 ms 00:16:13.291 [2024-12-03 10:43:43.901177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.291 [2024-12-03 10:43:43.901206] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.354 ms, result 0 00:16:13.551 true 00:16:13.551 10:43:43 -- ftl/trim.sh@81 -- # killprocess 72334 00:16:13.551 10:43:43 -- common/autotest_common.sh@936 -- # '[' -z 72334 ']' 00:16:13.551 10:43:43 -- common/autotest_common.sh@940 -- # kill -0 72334 00:16:13.551 10:43:43 -- common/autotest_common.sh@941 -- # uname 00:16:13.551 10:43:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.551 10:43:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72334 00:16:13.551 10:43:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.551 killing process with pid 72334 00:16:13.551 10:43:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.551 10:43:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72334' 00:16:13.551 10:43:43 -- common/autotest_common.sh@955 -- # kill 72334 00:16:13.551 10:43:43 -- common/autotest_common.sh@960 -- # wait 72334 00:16:14.120 [2024-12-03 10:43:44.520388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.520441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:16:14.120 [2024-12-03 10:43:44.520452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.120 [2024-12-03 10:43:44.520460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.520482] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:14.120 [2024-12-03 10:43:44.522614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.522639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:14.120 [2024-12-03 10:43:44.522651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:16:14.120 [2024-12-03 10:43:44.522658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.522900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.522908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:14.120 [2024-12-03 10:43:44.522917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:16:14.120 [2024-12-03 10:43:44.522923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.526516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.526543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:14.120 [2024-12-03 10:43:44.526554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:16:14.120 [2024-12-03 10:43:44.526560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.531859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.531903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:14.120 [2024-12-03 10:43:44.531913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:16:14.120 [2024-12-03 10:43:44.531920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.540225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.540249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.120 [2024-12-03 10:43:44.540261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.252 ms 00:16:14.120 [2024-12-03 10:43:44.540267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.547542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.547569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.120 [2024-12-03 10:43:44.547578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.243 ms 00:16:14.120 [2024-12-03 10:43:44.547591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.547703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.547711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.120 [2024-12-03 10:43:44.547721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:14.120 [2024-12-03 10:43:44.547726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
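Both unmap RPCs issued by trim.sh@78 and trim.sh@79 above completed with result 0; each trims a 1024-block range, one at LBA 0 and one at LBA 23591936, which is 23592960 - 1024, i.e. the last valid range given the L2P entry count from the startup layout dump. Replayed by hand against the same target, the calls would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Once they return, killprocess 72334 stops the target, and the 'FTL shutdown' management sequence traced here persists the L2P, NV cache metadata, valid/trim maps and superblock before setting the FTL clean state, so that a subsequent load can come up clean.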
00:16:14.120 [2024-12-03 10:43:44.556531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.120 [2024-12-03 10:43:44.556554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:14.120 [2024-12-03 10:43:44.556562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.786 ms 00:16:14.120 [2024-12-03 10:43:44.556568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.120 [2024-12-03 10:43:44.564920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.121 [2024-12-03 10:43:44.564943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:14.121 [2024-12-03 10:43:44.564955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.322 ms 00:16:14.121 [2024-12-03 10:43:44.564961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.121 [2024-12-03 10:43:44.572767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.121 [2024-12-03 10:43:44.572791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.121 [2024-12-03 10:43:44.572799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.777 ms 00:16:14.121 [2024-12-03 10:43:44.572804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.121 [2024-12-03 10:43:44.580651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.121 [2024-12-03 10:43:44.580674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.121 [2024-12-03 10:43:44.580682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.786 ms 00:16:14.121 [2024-12-03 10:43:44.580688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.121 [2024-12-03 10:43:44.580715] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.121 [2024-12-03 10:43:44.580728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580808] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 10:43:44.580971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.121 [2024-12-03 
10:43:44.580976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
[10:43:44.581] ftl0: Bands 38-100: 0 / 261120 wr_cnt: 0 state: free (63 identical ftl_dev_dump_bands records collapsed)
[10:43:44.581] ftl0: device UUID: d105f200-e666-43cc-861e-bc4bc7b990de
[10:43:44.581] ftl0: total valid LBAs: 0
[10:43:44.581] ftl0: total writes: 960
[10:43:44.581] ftl0: user writes: 0
[10:43:44.581] ftl0: WAF: inf
[10:43:44.581] ftl0: limits: crit: 0, high: 0, low: 0, start: 0
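A band dump like the one above emits one identical record per band, which is why these FTL logs balloon. A minimal sketch of the range-folding used here, for scanning such logs offline; this is a hypothetical helper, not SPDK code, and it assumes one log record per line:

    #!/usr/bin/env python3
    # Collapse consecutive identical ftl_dev_dump_bands records into ranges.
    # Matches records like:
    #   [...] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
    #   Band 37: 0 / 261120 wr_cnt: 0 state: free
    import re
    import sys

    BAND = re.compile(
        r"ftl_dev_dump_bands: \*NOTICE\*: \[FTL\]\[(\w+)\] "
        r"Band (\d+): (\d+ / \d+ wr_cnt: \d+ state: \w+)"
    )

    def collapse(lines):
        run = None  # (device, first_band, last_band, payload)
        for line in lines:
            m = BAND.search(line)
            if m:
                dev, band, payload = m.group(1), int(m.group(2)), m.group(3)
                if run and run[0] == dev and run[3] == payload and band == run[2] + 1:
                    run = (dev, run[1], band, payload)  # extend current run
                    continue
                if run:
                    yield run
                run = (dev, band, band, payload)
            else:
                if run:
                    yield run
                    run = None
                yield line.rstrip("\n")  # pass non-band records through
        if run:
            yield run

    for item in collapse(sys.stdin):
        if isinstance(item, tuple):
            dev, lo, hi, payload = item
            span = f"Band {lo}" if lo == hi else f"Bands {lo}-{hi}"
            print(f"[{dev}] {span}: {payload}")
        else:
            print(item)

Piping a raw autotest log through this reduces the hundred-band dumps to one line per run of identical states.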
[10:43:44.581] ftl0: Action "Dump statistics": duration 0.786 ms, status 0
[10:43:44.591] ftl0: Action "Deinitialize L2P": duration 9.822 ms, status 0
[10:43:44.591] ftl0: Action "Deinitialize P2L checkpointing": duration 0.160 ms, status 0
[10:43:44.628] ftl0: Rollback "Initialize reloc": duration 0.000 ms, status 0
[10:43:44.628] ftl0: Rollback "Initialize bands metadata": duration 0.000 ms, status 0
[10:43:44.628] ftl0: Rollback "Initialize trim map": duration 0.000 ms, status 0
[10:43:44.628] ftl0: Rollback "Initialize valid map": duration 0.000 ms, status 0
[10:43:44.692] ftl0: Rollback "Initialize NV cache": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Rollback "Initialize metadata": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Rollback "Initialize core IO channel": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Rollback "Initialize bands": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Rollback "Initialize memory pools": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Rollback "Initialize superblock": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Rollback "Open cache bdev": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Rollback "Open base bdev": duration 0.000 ms, status 0
[10:43:44.716] ftl0: Management process finished, name 'FTL shutdown', duration = 196.294 ms, result 0
10:43:45 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
10:43:45 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
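trim.sh@85 drives the read path through spdk_dd: 65536 blocks from bdev ftl0 into a plain file. Assuming the FTL bdev exposes 4 KiB logical blocks (consistent with the 256 MB total the copy reports below), that is 65536 x 4096 B = 256 MiB. A sketch of the same step, with paths taken from this run's VM and all flags copied from the command above:

    #!/usr/bin/env python3
    # Sketch of the dd step the test script runs above; adjust paths
    # for your own setup.
    import subprocess

    SPDK = "/home/vagrant/spdk_repo/spdk"
    COUNT = 65536          # blocks to copy, per --count above
    BLOCK_SIZE = 4096      # assumption: FTL bdev exposes 4 KiB blocks

    # 65536 blocks * 4096 B = 268435456 B = 256 MiB, matching the
    # "Copying: 256/256 [MB]" progress spdk_dd reports below.
    print(f"expected transfer: {COUNT * BLOCK_SIZE / 2**20:.0f} MiB")

    subprocess.run(
        [
            f"{SPDK}/build/bin/spdk_dd",
            "--ib=ftl0",                    # input bdev: the FTL device
            f"--of={SPDK}/test/ftl/data",   # output file
            f"--count={COUNT}",
            f"--json={SPDK}/test/ftl/config/ftl.json",  # bdev config
        ],
        check=True,
    )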
[10:43:45.470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[10:43:45.470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72387 ]
[10:43:45.618] app.c: spdk_app_start: Total cores available: 1
[10:43:45.784] reactor.c: reactor_run: Reactor started on core 0
[10:43:46.009] bdev.c: bdev_open_ext: Currently unable to find bdev with name: nvc0n1 (2 identical records collapsed)
[10:43:46.158] ftl0: Action "Check configuration": duration 0.005 ms, status 0
[10:43:46.160] ftl0: Action "Open base bdev": duration 2.159 ms, status 0
[10:43:46.160] ftl0: Using nvc0n1p0 as write buffer cache
[10:43:46.161] ftl0: Using bdev as NV Cache device
[10:43:46.161] ftl0: Action "Open cache bdev": duration 0.608 ms, status 0
[10:43:46.162] ftl0: SHM: clean 0, shm_clean 0
[10:43:46.173] ftl0: Action "Load super block": duration 10.759 ms, status 0
[10:43:46.173] ftl0: Action "Validate super block": duration 0.015 ms, status 0
[10:43:46.179] ftl0: Action "Initialize memory pools": duration 6.124 ms, status 0
[10:43:46.179] ftl0: Action "Initialize bands": duration 0.050 ms, status 0
[10:43:46.179] ftl0: Action "Register IO device": duration 0.006 ms, status 0
[10:43:46.179] ftl0: FTL IO channel created on ftl_core_thread
[10:43:46.182] ftl0: Action "Initialize core IO channel": duration 3.156 ms, status 0
[10:43:46.183] ftl0: Action "Decorate bands": duration 0.010 ms, status 0
[10:43:46.183] ftl0: FTL layout setup mode 0
[10:43:46.183] ftl0: nvc layout blob load 0x138 bytes, base layout blob load 0x48 bytes, layout blob load 0x140 bytes
[10:43:46.183] ftl0: nvc layout blob store 0x138 bytes, base layout blob store 0x48 bytes, layout blob store 0x140 bytes
[10:43:46.183] ftl0: Base device capacity: 103424.00 MiB
[10:43:46.183] ftl0: NV cache device capacity: 5171.00 MiB
[10:43:46.183] ftl0: L2P entries: 23592960
[10:43:46.183] ftl0: L2P address size: 4
[10:43:46.183] ftl0: P2L checkpoint pages: 1024
[10:43:46.183] ftl0: NV cache chunk count 4
[10:43:46.183] ftl0: Action "Initialize layout": duration 0.191 ms, status 0
[10:43:46.183] ftl0: Action "Verify layout": duration 0.036 ms, status 0
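The geometry above can be cross-checked by hand: 23592960 L2P entries at 4 bytes each is 94371840 B, exactly the 90.00 MiB l2p region in the NV cache layout dumped below, and 23592960 mappable 4 KiB blocks cover 92160 MiB of the 102400 MiB data region (the remainder is presumably the FTL's spare area). In sketch form, where the 4 KiB block size is an assumption consistent with the layout dump:

    #!/usr/bin/env python3
    # Cross-check of the geometry above; values copied from this log.
    L2P_ENTRIES = 23592960
    L2P_ADDR_SIZE = 4    # bytes per entry, per "L2P address size: 4"
    BLOCK = 4096         # assumed logical block size in bytes
    MiB = 2**20

    l2p_bytes = L2P_ENTRIES * L2P_ADDR_SIZE
    print(f"L2P table: {l2p_bytes / MiB:.2f} MiB")   # 90.00 MiB, matching
                                                     # the 'l2p' region below
    print(f"mapped LBAs: {L2P_ENTRIES * BLOCK / MiB:.0f} MiB")
    # 92160 MiB of a 102400 MiB base data region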
[10:43:46.183] ftl0: NV cache layout:
  Region sb:              offset 0.00 MiB,   blocks 0.12 MiB
  Region l2p:             offset 0.12 MiB,   blocks 90.00 MiB
  Region band_md:         offset 90.12 MiB,  blocks 0.50 MiB
  Region band_md_mirror:  offset 90.62 MiB,  blocks 0.50 MiB
  Region nvc_md:          offset 107.62 MiB, blocks 0.12 MiB
  Region nvc_md_mirror:   offset 107.75 MiB, blocks 0.12 MiB
  Region data_nvc:        offset 107.88 MiB, blocks 4096.00 MiB
  Region p2l0:            offset 91.12 MiB,  blocks 4.00 MiB
  Region p2l1:            offset 95.12 MiB,  blocks 4.00 MiB
  Region p2l2:            offset 99.12 MiB,  blocks 4.00 MiB
  Region p2l3:            offset 103.12 MiB, blocks 4.00 MiB
  Region trim_md:         offset 107.12 MiB, blocks 0.25 MiB
  Region trim_md_mirror:  offset 107.38 MiB, blocks 0.25 MiB
[10:43:46.183] ftl0: Base device layout:
  Region sb_mirror:       offset 0.00 MiB,      blocks 0.12 MiB
  Region vmap:            offset 102400.25 MiB, blocks 3.38 MiB
  Region data_btm:        offset 0.25 MiB,      blocks 102400.00 MiB
[10:43:46.183] ftl0: SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
  Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80
  Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80
  Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400
  Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400
  Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400
  Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400
  Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40
  Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20
  Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20
  Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000
  Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720
[10:43:46.183] ftl0: SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[10:43:46.183] ftl0: Action "Layout upgrade": duration 0.439 ms, status 0
[10:43:46.197] ftl0: Action "Initialize metadata": duration 13.776 ms, status 0
[10:43:46.197] ftl0: Action "Initialize band addresses": duration 0.050 ms, status 0
[10:43:46.241] ftl0: Action "Initialize NV cache": duration 44.067 ms, status 0
[10:43:46.242] ftl0: Action "Initialize valid map": duration 0.003 ms, status 0
[10:43:46.242] ftl0: Action "Initialize trim map": duration 0.379 ms, status 0
[10:43:46.242] ftl0: Action "Initialize bands metadata": duration 0.080 ms, status 0
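Each SB metadata line above encodes a region as type, version, block offset, and block size in hex; multiplying by the 4 KiB block size recovers the MiB figures from the layout dump. For example, type 0x2 with blk_sz 0x5a00 is 23040 blocks = 90.00 MiB (the l2p region), and type 0x8 with blk_sz 0x100000 is 1048576 blocks = 4096.00 MiB (data_nvc). A decoding sketch, again a hypothetical log-analysis helper with the 4 KiB block size assumed:

    #!/usr/bin/env python3
    # Decode "Region type:... blk_offs:... blk_sz:..." lines from the
    # SB metadata dump above into MiB figures.
    import re
    import sys

    REGION = re.compile(r"Region type:(0x[0-9a-f]+) ver:(\d+) "
                        r"blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)")
    BLOCK_MiB = 4096 / 2**20   # one 4 KiB block, expressed in MiB

    for line in sys.stdin:
        m = REGION.search(line)
        if not m:
            continue
        rtype, ver, offs, size = m.groups()
        # e.g. type 0x8: blk_sz 0x100000 = 1048576 blocks = 4096.00 MiB
        print(f"type {rtype} ver {ver}: "
              f"offset {int(offs, 16) * BLOCK_MiB:.2f} MiB, "
              f"size {int(size, 16) * BLOCK_MiB:.2f} MiB")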
[10:43:46.255] ftl0: Action "Initialize reloc": duration 12.892 ms, status 0
[10:43:46.266] ftl0: FTL NV Cache: full chunks = 1, empty chunks = 3
[10:43:46.266] ftl0: FTL NV Cache: state loaded successfully
[10:43:46.266] ftl0: Action "Restore NV cache metadata": duration 10.661 ms, status 0
[10:43:46.284] ftl0: Action "Restore valid map metadata": duration 18.592 ms, status 0
[10:43:46.294] ftl0: Action "Restore band info metadata": duration 9.324 ms, status 0
[10:43:46.303] ftl0: Action "Restore trim metadata": duration 9.186 ms, status 0
[10:43:46.303] ftl0: Action "Initialize P2L checkpointing": duration 0.216 ms, status 0
[10:43:46.352] ftl0: Action "Restore P2L checkpoints": duration 48.787 ms, status 0
[10:43:46.360] ftl0: l2p maximum resident size is: 59 (of 60) MiB
[10:43:46.374] ftl0: Action "Initialize L2P": duration 21.923 ms, status 0
[10:43:46.374] ftl0: Action "Restore L2P": duration 0.005 ms, status 0
[10:43:46.374] ftl0: Action "Finalize band initialization": duration 0.028 ms, status 0
[10:43:46.376] ftl0: Action "Free P2L region bufs": duration 1.019 ms, status 0
[10:43:46.376] ftl0: Action "Start core poller": duration 0.005 ms, status 0
[10:43:46.376] ftl0: Self test skipped
[10:43:46.376] ftl0: Action "Self test on startup": duration 0.009 ms, status 0
[10:43:46.395] ftl0: Action "Set FTL dirty state": duration 19.213 ms, status 0
[10:43:46.395] ftl0: Action "Finalize initialization": duration 0.029 ms, status 0
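Every management step logs its own duration, so the 237.924 ms 'FTL startup' total reported just below can be broken down from the trace: in this run the per-step durations sum to roughly 234 ms, dominated by "Restore P2L checkpoints" (48.8 ms) and "Initialize NV cache" (44.1 ms), with the small remainder being inter-step overhead. A sketch that pairs trace_step name and duration records, assuming one record per line as in the condensed form used here (raw Jenkins lines would need splitting first):

    #!/usr/bin/env python3
    # Sum per-step durations from trace_step records to see where
    # FTL management time goes. Reads the log on stdin.
    import re
    import sys

    NAME = re.compile(r"407:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR = re.compile(r"409:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] "
                     r"duration: ([\d.]+) ms")

    steps, pending = [], None
    for line in sys.stdin:
        m = NAME.search(line)
        if m:
            pending = m.group(1).strip()   # remember the step name
            continue
        m = DUR.search(line)
        if m and pending:
            steps.append((pending, float(m.group(1))))
            pending = None

    for name, ms in sorted(steps, key=lambda s: -s[1])[:5]:
        print(f"{ms:8.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):8.3f} ms  total across steps")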
[10:43:46.396] ftl0: FTL IO channel created on app_thread
[10:43:46.398] ftl0: Management process finished, name 'FTL startup', duration = 237.924 ms, result 0
[10:43:46.399] ftl0: FTL IO channel destroy on app_thread
[10:43:46.411] ftl0: FTL IO channel created on app_thread
[10:43:48Z-10:44:07Z] Copying: 17, 28, 40, 53, 65, 77, 89, 101, 112, 124, 136, 148, 160, 170, 182, 194, 206, 217, 229, 241, 253 of 256 [MB] at 10-17 MBps per interval (21 progress records collapsed)
Copying: 256/256 [MB] (average 12 MBps)
[10:44:07.620] ftl0: FTL IO channel destroy on app_thread
[10:44:07.627] ftl0: Action "Deinit core IO channel": duration 0.004 ms, status 0
[10:44:07.627] ftl0: FTL IO channel destroy on ftl_core_thread
[10:44:07.630] ftl0: Action "Unregister IO device": duration 2.229 ms, status 0
[10:44:07.630] ftl0: Action "Stop core poller": duration 0.191 ms, status 0
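The "average 12 MBps" spdk_dd reports for the copy above is easy to sanity-check against the wall clock: the copy starts right after 'FTL startup' finishes at 10:43:46.4 and the last progress record lands at 10:44:07.6, so 256 MB over about 21.2 s is roughly 12.1 MBps. As a check, with timestamps copied from this log:

    #!/usr/bin/env python3
    # Back-of-the-envelope check of spdk_dd's reported average throughput.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    start = datetime.strptime("2024-12-03 10:43:46.399", fmt)  # copy begins
    end = datetime.strptime("2024-12-03 10:44:07.620", fmt)    # copy done
    mb = 256                                                   # MB copied

    elapsed = (end - start).total_seconds()                 # ~21.2 s
    print(f"{mb / elapsed:.1f} MBps over {elapsed:.1f} s")  # ~12.1 MBps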
[10:44:07.633] ftl0: Action "Persist L2P": duration 2.813 ms, status 0
[10:44:07.638] ftl0: Action "Finish L2P unmaps": duration 5.157 ms, status 0
[10:44:07.657] ftl0: Action "Persist NV cache metadata": duration 18.595 ms, status 0
[10:44:07.669] ftl0: Action "Persist valid map metadata": duration 11.933 ms, status 0
[10:44:07.669] ftl0: Action "Persist P2L metadata": duration 0.068 ms, status 0
[10:44:07.688] ftl0: Action "persist band info metadata": duration 19.006 ms, status 0
[10:44:07.706] ftl0: Action "persist trim metadata": duration 18.034 ms, status 0
[10:44:07.724] ftl0: Action "Persist superblock": duration 17.890 ms, status 0
[10:44:07.742] ftl0: Action "Set FTL clean state": duration 17.731 ms, status 0
[10:44:07.742] ftl0: Bands validity: Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical ftl_dev_dump_bands records collapsed)
[10:44:07.742] ftl0: device UUID: d105f200-e666-43cc-861e-bc4bc7b990de
[10:44:07.742] ftl0: total valid LBAs: 0
[10:44:07.742] ftl0: total writes: 960
[10:44:07.742] ftl0: user writes: 0
[10:44:07.742] ftl0: WAF: inf
[10:44:07.742] ftl0: limits: crit: 0, high: 0, low: 0, start: 0
[10:44:07.742] ftl0: Action "Dump statistics": duration 0.665 ms, status 0
[10:44:07.752] ftl0: Action "Deinitialize L2P": duration 9.943 ms, status 0
[10:44:07.753] ftl0: Action "Deinitialize P2L checkpointing": duration 0.150 ms, status 0
reloc 00:16:37.183 [2024-12-03 10:44:07.784685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.183 [2024-12-03 10:44:07.784691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.183 [2024-12-03 10:44:07.784751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.183 [2024-12-03 10:44:07.784758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.183 [2024-12-03 10:44:07.784764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.183 [2024-12-03 10:44:07.784770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.183 [2024-12-03 10:44:07.784802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.183 [2024-12-03 10:44:07.784809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.183 [2024-12-03 10:44:07.784816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.183 [2024-12-03 10:44:07.784824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.183 [2024-12-03 10:44:07.784838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.183 [2024-12-03 10:44:07.784844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.183 [2024-12-03 10:44:07.784850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.183 [2024-12-03 10:44:07.784856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.845133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.845168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.442 [2024-12-03 10:44:07.845181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.845187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.869249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.442 [2024-12-03 10:44:07.869257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.869265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.869319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.442 [2024-12-03 10:44:07.869326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.869332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.869370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.442 [2024-12-03 10:44:07.869378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.869384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.869469] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.442 [2024-12-03 10:44:07.869476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.869482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.869519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:37.442 [2024-12-03 10:44:07.869525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.869532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.869575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.442 [2024-12-03 10:44:07.869581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.869588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.442 [2024-12-03 10:44:07.869642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.442 [2024-12-03 10:44:07.869650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.442 [2024-12-03 10:44:07.869655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.442 [2024-12-03 10:44:07.869780] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 242.014 ms, result 0 00:16:38.011 00:16:38.011 00:16:38.011 10:44:08 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:38.011 10:44:08 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:38.581 10:44:09 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.581 [2024-12-03 10:44:09.171850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
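[Editor's note, not part of the captured log] The three ftl/trim.sh commands above are the verification step of this FTL trim test: cmp --bytes=4194304 checks that the first 4 MiB of the read-back file (presumably the trimmed range) compares equal to /dev/zero, md5sum records a checksum of the whole file, and spdk_dd then rewrites the random pattern through the ftl0 bdev (1024 blocks; the later "Copying: 4096/4096 [kB]" progress line suggests a 4 KiB block size, i.e. 4 MiB total). A minimal bash sketch of the same sequence; SPDK_DIR is an assumed variable introduced here for readability, not something the test exports:

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed location, mirrors the paths in the log
  # Trimmed range should read back as zeroes: compare its first 4 MiB with /dev/zero.
  cmp --bytes=4194304 "$SPDK_DIR/test/ftl/data" /dev/zero
  # Record a checksum of the full read-back for later comparison.
  md5sum "$SPDK_DIR/test/ftl/data"
  # Rewrite 1024 blocks of the random pattern onto the ftl0 bdev via spdk_dd.
  "$SPDK_DIR/build/bin/spdk_dd" \
      --if="$SPDK_DIR/test/ftl/random_pattern" \
      --ob=ftl0 --count=1024 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json"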
00:16:38.581 [2024-12-03 10:44:09.171945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72645 ]
00:16:38.839 [2024-12-03 10:44:09.314642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:39.098 [2024-12-03 10:44:09.481590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:39.098 [2024-12-03 10:44:09.707463] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:39.098 [2024-12-03 10:44:09.707515] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:39.357 [2024-12-03 10:44:09.856049 - 10:44:09.856112] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration: 0.005 ms, status: 0
00:16:39.357 [2024-12-03 10:44:09.858326 - 10:44:09.858369] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration: 2.201 ms, status: 0
00:16:39.357 [2024-12-03 10:44:09.858426] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:39.357 [2024-12-03 10:44:09.859042] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:39.357 [2024-12-03 10:44:09.859068 - 10:44:09.859088] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration: 0.648 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.860385] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:39.358 [2024-12-03 10:44:09.870920 - 10:44:09.870962] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration: 10.537 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.871037 - 10:44:09.871068] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration: 0.021 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.877321 - 10:44:09.877362] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration: 6.218 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.877437 - 10:44:09.877458] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration: 0.050 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.877477 - 10:44:09.877497] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration: 0.006 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.877523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:16:39.358 [2024-12-03 10:44:09.880677 - 10:44:09.880716] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration: 3.165 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.880748 - 10:44:09.880766] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration: 0.011 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.880781] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:16:39.358 [2024-12-03 10:44:09.880796 - 10:44:09.880839] upgrade/ftl_sb_v5.c: 278-294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes; base layout blob load 0x48 bytes; layout blob load 0x140 bytes
00:16:39.358 [2024-12-03 10:44:09.880899 - 10:44:09.880916] upgrade/ftl_sb_v5.c: 92-109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes; base layout blob store 0x48 bytes; layout blob store 0x140 bytes
00:16:39.358 [2024-12-03 10:44:09.880923] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:16:39.358 [2024-12-03 10:44:09.880930] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:16:39.358 [2024-12-03 10:44:09.880936] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:16:39.358 [2024-12-03 10:44:09.880942] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:16:39.358 [2024-12-03 10:44:09.880948] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:16:39.358 [2024-12-03 10:44:09.880955] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:16:39.358 [2024-12-03 10:44:09.880962 - 10:44:09.880979] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration: 0.183 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.881029 - 10:44:09.881047] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration: 0.037 ms, status: 0
00:16:39.358 [2024-12-03 10:44:09.881148 - 10:44:09.881371] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks; per-field dump_region entries condensed): sb: 0.00 / 0.12 MiB; l2p: 0.12 / 90.00 MiB; band_md: 90.12 / 0.50 MiB; band_md_mirror: 90.62 / 0.50 MiB; nvc_md: 107.62 / 0.12 MiB; nvc_md_mirror: 107.75 / 0.12 MiB; data_nvc: 107.88 / 4096.00 MiB; p2l0: 91.12 / 4.00 MiB; p2l1: 95.12 / 4.00 MiB; p2l2: 99.12 / 4.00 MiB; p2l3: 103.12 / 4.00 MiB; trim_md: 107.12 / 0.25 MiB; trim_md_mirror: 107.38 / 0.25 MiB
00:16:39.358 [2024-12-03 10:44:09.881376 - 10:44:09.881427] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks): sb_mirror: 0.00 / 0.12 MiB; vmap: 102400.25 / 3.38 MiB; data_btm: 0.25 / 102400.00 MiB
00:16:39.359 [2024-12-03 10:44:09.881433] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:16:39.359 [2024-12-03 10:44:09.881441 - 10:44:09.881513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 | type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 | type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 | type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 | type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 | type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 | type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 | type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 | type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 | type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 | type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 | type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 | type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 | type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 (14 entries condensed)
00:16:39.359 [2024-12-03 10:44:09.881519] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:16:39.359 [2024-12-03 10:44:09.881528 - 10:44:09.881550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 | type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 | type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 | type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 | type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 (5 entries condensed)
00:16:39.359 [2024-12-03 10:44:09.881556 - 10:44:09.881574] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration: 0.447 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.895376 - 10:44:09.895419] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration: 13.760 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.895509 - 10:44:09.895532] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration: 0.051 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.936429 - 10:44:09.936478] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration: 40.879 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.936535 - 10:44:09.936559] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration: 0.003 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.936945 - 10:44:09.936978] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration: 0.371 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.937091 - 10:44:09.937113] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration: 0.092 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.950001 - 10:44:09.950045] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration: 12.871 ms, status: 0
00:16:39.359 [2024-12-03 10:44:09.960811] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:16:39.359 [2024-12-03 10:44:09.960839] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:16:39.359 [2024-12-03 10:44:09.960847 - 10:44:09.960867] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration: 10.714 ms, status: 0
00:16:39.618 [2024-12-03 10:44:09.980306 - 10:44:09.980352] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration: 19.386 ms, status: 0
00:16:39.618 [2024-12-03 10:44:09.989878 - 10:44:09.989922] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration: 9.471 ms, status: 0
00:16:39.618 [2024-12-03 10:44:09.998770 - 10:44:09.998808] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration: 8.809 ms, status: 0
00:16:39.618 [2024-12-03 10:44:09.999098 - 10:44:09.999125] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration: 0.227 ms, status: 0
00:16:39.618 [2024-12-03 10:44:10.049009 - 10:44:10.049071] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration: 49.866 ms, status: 0
00:16:39.618 [2024-12-03 10:44:10.057221] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:16:39.618 [2024-12-03 10:44:10.071755 - 10:44:10.071803] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration: 22.615 ms, status: 0
00:16:39.618 [2024-12-03 10:44:10.071862 - 10:44:10.071887] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration: 0.005 ms, status: 0
00:16:39.618 [2024-12-03 10:44:10.071932 - 10:44:10.071953] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration: 0.028 ms, status: 0
00:16:39.618 [2024-12-03 10:44:10.072975 - 10:44:10.073016] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Free P2L region bufs': duration: 1.004 ms, status: 0
00:16:39.618 [2024-12-03 10:44:10.073043 - 10:44:10.073080] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration: 0.005 ms, status: 0
00:16:39.618 [2024-12-03 10:44:10.073110] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:16:39.618 [2024-12-03 10:44:10.073118 - 10:44:10.073136] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration: 0.009 ms, status: 0
00:16:39.619 [2024-12-03 10:44:10.092644 - 10:44:10.092687] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration: 19.488 ms, status: 0
00:16:39.619 [2024-12-03 10:44:10.092758 - 10:44:10.092780] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration: 0.029 ms, status: 0
00:16:39.619 [2024-12-03 10:44:10.093925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:39.619 [2024-12-03 10:44:10.096425] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 237.620 ms, result 0
00:16:39.619 [2024-12-03 10:44:10.097702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:39.619 [2024-12-03 10:44:10.108633] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:39.878 [2024-12-03T10:44:10.491Z] Copying: 4096/4096 [kB] (average 10 MBps)
00:16:39.878 [2024-12-03 10:44:10.474559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:39.878 [2024-12-03 10:44:10.481007 - 10:44:10.481060] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration: 0.003 ms, status: 0
00:16:39.878 [2024-12-03 10:44:10.481077] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:39.878 [2024-12-03 10:44:10.483235 - 10:44:10.483271] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration: 2.148 ms, status: 0
00:16:39.878 [2024-12-03 10:44:10.485777 - 10:44:10.485818] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration: 2.487 ms, status: 0
00:16:39.878 [2024-12-03 10:44:10.489266 - 10:44:10.489300] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration: 3.434 ms, status: 0
00:16:40.150 [2024-12-03 10:44:10.494456 - 10:44:10.494494] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P unmaps': duration: 5.134 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.512752 - 10:44:10.512791] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration: 18.215 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.525338 - 10:44:10.525378] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration: 12.512 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.525480 - 10:44:10.525500] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration: 0.066 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.544352 - 10:44:10.544389] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'persist band info metadata': duration: 18.840 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.562466 - 10:44:10.562503] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'persist trim metadata': duration: 18.043 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.580127 - 10:44:10.580164] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration: 17.591 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.598238 - 10:44:10.598275] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration: 18.022 ms, status: 0
00:16:40.151 [2024-12-03 10:44:10.598309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:40.151 [2024-12-03 10:44:10.598321 - 10:44:10.598893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical entries condensed)
00:16:40.152 [2024-12-03 10:44:10.598905] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:40.152 [2024-12-03 10:44:10.598912] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d105f200-e666-43cc-861e-bc4bc7b990de
00:16:40.152 [2024-12-03 10:44:10.598919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:40.152 [2024-12-03 10:44:10.598924] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:40.152 [2024-12-03 10:44:10.598930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:40.152 [2024-12-03 10:44:10.598936] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:40.152 [2024-12-03 10:44:10.598944] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:40.152 [2024-12-03 10:44:10.598949 - 10:44:10.598965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0, high: 0, low: 0, start: 0 (4 entries condensed)
00:16:40.152 [2024-12-03 10:44:10.598970 - 10:44:10.598987] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration: 0.662 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.608336 - 10:44:10.608376] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration: 9.336 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.608549 - 10:44:10.608569] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration: 0.143 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.640004 - 10:44:10.640046] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.640118 - 10:44:10.640138] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.640172 - 10:44:10.640195] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.640209 - 10:44:10.640226] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.701285 - 10:44:10.701338] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.725548 - 10:44:10.725589] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.725629 - 10:44:10.725650] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.725677 - 10:44:10.725696] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.725772 - 10:44:10.725795] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.725821 - 10:44:10.725843] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration: 0.000 ms, status: 0
00:16:40.152 [2024-12-03 10:44:10.725877 - 10:44:10.725896] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration: 0.000 ms, status: 0
00:16:40.152
[2024-12-03 10:44:10.725938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.153 [2024-12-03 10:44:10.725947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.153 [2024-12-03 10:44:10.725954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.153 [2024-12-03 10:44:10.725960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.153 [2024-12-03 10:44:10.726102] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 245.070 ms, result 0 00:16:41.134 00:16:41.134 00:16:41.134 10:44:11 -- ftl/trim.sh@93 -- # svcpid=72670 00:16:41.134 10:44:11 -- ftl/trim.sh@94 -- # waitforlisten 72670 00:16:41.134 10:44:11 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:41.134 10:44:11 -- common/autotest_common.sh@829 -- # '[' -z 72670 ']' 00:16:41.134 10:44:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.134 10:44:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.134 10:44:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.134 10:44:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.134 10:44:11 -- common/autotest_common.sh@10 -- # set +x 00:16:41.134 [2024-12-03 10:44:11.496551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:41.134 [2024-12-03 10:44:11.496646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72670 ] 00:16:41.134 [2024-12-03 10:44:11.638243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.393 [2024-12-03 10:44:11.805124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.393 [2024-12-03 10:44:11.805300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.961 10:44:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.961 10:44:12 -- common/autotest_common.sh@862 -- # return 0 00:16:41.961 10:44:12 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:41.961 [2024-12-03 10:44:12.505610] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:41.961 [2024-12-03 10:44:12.505662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.221 [2024-12-03 10:44:12.670410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.670444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.221 [2024-12-03 10:44:12.670457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.221 [2024-12-03 10:44:12.670463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.672718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.672747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.221 [2024-12-03 10:44:12.672756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.239 ms 00:16:42.221 [2024-12-03 10:44:12.672762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.672821] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.221 [2024-12-03 10:44:12.673448] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.221 [2024-12-03 10:44:12.673473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.673479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.221 [2024-12-03 10:44:12.673487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:16:42.221 [2024-12-03 10:44:12.673493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.675247] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:42.221 [2024-12-03 10:44:12.686009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.686040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:42.221 [2024-12-03 10:44:12.686050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.767 ms 00:16:42.221 [2024-12-03 10:44:12.686065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.686141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.686151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:42.221 [2024-12-03 10:44:12.686158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:42.221 [2024-12-03 10:44:12.686165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.692338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.692366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.221 [2024-12-03 10:44:12.692373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:16:42.221 [2024-12-03 10:44:12.692380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.692452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.692462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.221 [2024-12-03 10:44:12.692468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:42.221 [2024-12-03 10:44:12.692476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.692499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.692507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.221 [2024-12-03 10:44:12.692513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:42.221 [2024-12-03 10:44:12.692521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.692545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:42.221 [2024-12-03 10:44:12.695698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.695720] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.221 [2024-12-03 10:44:12.695729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:16:42.221 [2024-12-03 10:44:12.695735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.695768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.221 [2024-12-03 10:44:12.695774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.221 [2024-12-03 10:44:12.695783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:42.221 [2024-12-03 10:44:12.695791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.221 [2024-12-03 10:44:12.695808] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:42.221 [2024-12-03 10:44:12.695824] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:42.221 [2024-12-03 10:44:12.695853] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:42.221 [2024-12-03 10:44:12.695866] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:42.221 [2024-12-03 10:44:12.695927] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:42.221 [2024-12-03 10:44:12.695935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.221 [2024-12-03 10:44:12.695949] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:42.221 [2024-12-03 10:44:12.695957] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.221 [2024-12-03 10:44:12.695965] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.222 [2024-12-03 10:44:12.695971] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:42.222 [2024-12-03 10:44:12.695978] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:42.222 [2024-12-03 10:44:12.695984] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:42.222 [2024-12-03 10:44:12.695993] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:42.222 [2024-12-03 10:44:12.695999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.222 [2024-12-03 10:44:12.696006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.222 [2024-12-03 10:44:12.696011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:42.222 [2024-12-03 10:44:12.696018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.222 [2024-12-03 10:44:12.696078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.222 [2024-12-03 10:44:12.696086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.222 [2024-12-03 10:44:12.696092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:42.222 [2024-12-03 10:44:12.696099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.222 [2024-12-03 10:44:12.696162] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.222 [2024-12-03 10:44:12.696171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.222 [2024-12-03 10:44:12.696177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.222 [2024-12-03 10:44:12.696197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.222 [2024-12-03 10:44:12.696218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.222 [2024-12-03 10:44:12.696230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.222 [2024-12-03 10:44:12.696237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:42.222 [2024-12-03 10:44:12.696242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.222 [2024-12-03 10:44:12.696248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.222 [2024-12-03 10:44:12.696254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:42.222 [2024-12-03 10:44:12.696262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.222 [2024-12-03 10:44:12.696273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:42.222 [2024-12-03 10:44:12.696278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:42.222 [2024-12-03 10:44:12.696289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:42.222 [2024-12-03 10:44:12.696296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696301] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.222 [2024-12-03 10:44:12.696309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.222 [2024-12-03 10:44:12.696330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.222 [2024-12-03 10:44:12.696347] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:42.222 [2024-12-03 10:44:12.696365] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.222 [2024-12-03 10:44:12.696383] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.222 [2024-12-03 10:44:12.696394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.222 [2024-12-03 10:44:12.696399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:42.222 [2024-12-03 10:44:12.696409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.222 [2024-12-03 10:44:12.696414] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.222 [2024-12-03 10:44:12.696424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.222 [2024-12-03 10:44:12.696430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.222 [2024-12-03 10:44:12.696443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:42.222 [2024-12-03 10:44:12.696449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.222 [2024-12-03 10:44:12.696454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.222 [2024-12-03 10:44:12.696461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.222 [2024-12-03 10:44:12.696466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.222 [2024-12-03 10:44:12.696472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:42.222 [2024-12-03 10:44:12.696478] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.222 [2024-12-03 10:44:12.696486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.222 [2024-12-03 10:44:12.696492] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:42.222 [2024-12-03 10:44:12.696499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:42.222 [2024-12-03 10:44:12.696504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:42.222 [2024-12-03 10:44:12.696513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:42.222 [2024-12-03 10:44:12.696519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:42.222 [2024-12-03 10:44:12.696526] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:42.222 [2024-12-03 10:44:12.696534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:42.222 [2024-12-03 
10:44:12.696540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:42.222 [2024-12-03 10:44:12.696545] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:42.222 [2024-12-03 10:44:12.696552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:42.222 [2024-12-03 10:44:12.696557] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:42.222 [2024-12-03 10:44:12.696564] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:42.222 [2024-12-03 10:44:12.696569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:42.222 [2024-12-03 10:44:12.696576] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.222 [2024-12-03 10:44:12.696582] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.222 [2024-12-03 10:44:12.696589] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.222 [2024-12-03 10:44:12.696595] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.222 [2024-12-03 10:44:12.696602] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.222 [2024-12-03 10:44:12.696607] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.222 [2024-12-03 10:44:12.696616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.222 [2024-12-03 10:44:12.696623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.222 [2024-12-03 10:44:12.696630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:16:42.222 [2024-12-03 10:44:12.696636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.222 [2024-12-03 10:44:12.710459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.222 [2024-12-03 10:44:12.710483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.222 [2024-12-03 10:44:12.710496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.775 ms 00:16:42.222 [2024-12-03 10:44:12.710505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.222 [2024-12-03 10:44:12.710596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.222 [2024-12-03 10:44:12.710605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:42.223 [2024-12-03 10:44:12.710613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:42.223 [2024-12-03 10:44:12.710620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.737384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 
10:44:12.737409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.223 [2024-12-03 10:44:12.737420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.746 ms 00:16:42.223 [2024-12-03 10:44:12.737427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.737473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.737483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.223 [2024-12-03 10:44:12.737491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:42.223 [2024-12-03 10:44:12.737497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.737883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.737901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.223 [2024-12-03 10:44:12.737911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:16:42.223 [2024-12-03 10:44:12.737917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.738016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.738023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.223 [2024-12-03 10:44:12.738033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:16:42.223 [2024-12-03 10:44:12.738039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.751767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.751790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.223 [2024-12-03 10:44:12.751801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.699 ms 00:16:42.223 [2024-12-03 10:44:12.751808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.762578] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:42.223 [2024-12-03 10:44:12.762604] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:42.223 [2024-12-03 10:44:12.762614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.762621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:42.223 [2024-12-03 10:44:12.762629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.725 ms 00:16:42.223 [2024-12-03 10:44:12.762635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.781774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.781806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:42.223 [2024-12-03 10:44:12.781818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.083 ms 00:16:42.223 [2024-12-03 10:44:12.781825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.791354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.791385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:16:42.223 [2024-12-03 10:44:12.791395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.468 ms 00:16:42.223 [2024-12-03 10:44:12.791401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.800081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.800105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:42.223 [2024-12-03 10:44:12.800117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.637 ms 00:16:42.223 [2024-12-03 10:44:12.800122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.223 [2024-12-03 10:44:12.800398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.223 [2024-12-03 10:44:12.800407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.223 [2024-12-03 10:44:12.800418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:42.223 [2024-12-03 10:44:12.800423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.848997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.849028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:42.482 [2024-12-03 10:44:12.849041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.554 ms 00:16:42.482 [2024-12-03 10:44:12.849047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.857073] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:42.482 [2024-12-03 10:44:12.871483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.871517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:42.482 [2024-12-03 10:44:12.871528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.373 ms 00:16:42.482 [2024-12-03 10:44:12.871536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.871606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.871618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:42.482 [2024-12-03 10:44:12.871625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.482 [2024-12-03 10:44:12.871634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.871679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.871687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.482 [2024-12-03 10:44:12.871694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:42.482 [2024-12-03 10:44:12.871703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.872728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.872751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:42.482 [2024-12-03 10:44:12.872759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:16:42.482 [2024-12-03 10:44:12.872767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:42.482 [2024-12-03 10:44:12.872796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.872807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.482 [2024-12-03 10:44:12.872813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:42.482 [2024-12-03 10:44:12.872820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.872852] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:42.482 [2024-12-03 10:44:12.872863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.872870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:42.482 [2024-12-03 10:44:12.872877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:42.482 [2024-12-03 10:44:12.872883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.892012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.892039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.482 [2024-12-03 10:44:12.892050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.109 ms 00:16:42.482 [2024-12-03 10:44:12.892063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.892134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.482 [2024-12-03 10:44:12.892143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:42.482 [2024-12-03 10:44:12.892152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:42.482 [2024-12-03 10:44:12.892159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.482 [2024-12-03 10:44:12.892911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.482 [2024-12-03 10:44:12.895374] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.242 ms, result 0 00:16:42.482 [2024-12-03 10:44:12.896914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.482 Some configs were skipped because the RPC state that can call them passed over. 
00:16:42.482 10:44:12 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:42.741 [2024-12-03 10:44:13.126679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.741 [2024-12-03 10:44:13.126713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:42.741 [2024-12-03 10:44:13.126722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.993 ms 00:16:42.741 [2024-12-03 10:44:13.126730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.741 [2024-12-03 10:44:13.126758] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.071 ms, result 0 00:16:42.741 true 00:16:42.741 10:44:13 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:42.741 [2024-12-03 10:44:13.333424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.741 [2024-12-03 10:44:13.333452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:42.741 [2024-12-03 10:44:13.333462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.465 ms 00:16:42.741 [2024-12-03 10:44:13.333469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.741 [2024-12-03 10:44:13.333498] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.536 ms, result 0 00:16:42.741 true 00:16:42.741 10:44:13 -- ftl/trim.sh@102 -- # killprocess 72670 00:16:42.741 10:44:13 -- common/autotest_common.sh@936 -- # '[' -z 72670 ']' 00:16:42.741 10:44:13 -- common/autotest_common.sh@940 -- # kill -0 72670 00:16:42.741 10:44:13 -- common/autotest_common.sh@941 -- # uname 00:16:43.000 10:44:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.000 10:44:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72670 00:16:43.000 killing process with pid 72670 00:16:43.000 10:44:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:43.000 10:44:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:43.000 10:44:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72670' 00:16:43.000 10:44:13 -- common/autotest_common.sh@955 -- # kill 72670 00:16:43.001 10:44:13 -- common/autotest_common.sh@960 -- # wait 72670 00:16:43.569 [2024-12-03 10:44:13.948396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.569 [2024-12-03 10:44:13.948450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.569 [2024-12-03 10:44:13.948461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.569 [2024-12-03 10:44:13.948469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.569 [2024-12-03 10:44:13.948491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:43.569 [2024-12-03 10:44:13.950622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.569 [2024-12-03 10:44:13.950648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.569 [2024-12-03 10:44:13.950661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:16:43.569 [2024-12-03 10:44:13.950667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.569 [2024-12-03 
10:44:13.950914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.569 [2024-12-03 10:44:13.950922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.569 [2024-12-03 10:44:13.950931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:16:43.569 [2024-12-03 10:44:13.950938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.569 [2024-12-03 10:44:13.954551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.569 [2024-12-03 10:44:13.954577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:43.569 [2024-12-03 10:44:13.954588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.595 ms 00:16:43.569 [2024-12-03 10:44:13.954594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.569 [2024-12-03 10:44:13.959902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.569 [2024-12-03 10:44:13.959936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:43.569 [2024-12-03 10:44:13.959946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.276 ms 00:16:43.569 [2024-12-03 10:44:13.959952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:13.968255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.570 [2024-12-03 10:44:13.968279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:43.570 [2024-12-03 10:44:13.968290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.252 ms 00:16:43.570 [2024-12-03 10:44:13.968296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:13.975593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.570 [2024-12-03 10:44:13.975620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:43.570 [2024-12-03 10:44:13.975629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.264 ms 00:16:43.570 [2024-12-03 10:44:13.975636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:13.975750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.570 [2024-12-03 10:44:13.975759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:43.570 [2024-12-03 10:44:13.975768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:43.570 [2024-12-03 10:44:13.975775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:13.984549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.570 [2024-12-03 10:44:13.984572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:43.570 [2024-12-03 10:44:13.984581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.756 ms 00:16:43.570 [2024-12-03 10:44:13.984586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:13.992897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.570 [2024-12-03 10:44:13.992921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:43.570 [2024-12-03 10:44:13.992933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.279 ms 00:16:43.570 [2024-12-03 10:44:13.992938] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:14.000447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.570 [2024-12-03 10:44:14.000470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.570 [2024-12-03 10:44:14.000478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.478 ms 00:16:43.570 [2024-12-03 10:44:14.000484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:14.008301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.570 [2024-12-03 10:44:14.008324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.570 [2024-12-03 10:44:14.008332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.757 ms 00:16:43.570 [2024-12-03 10:44:14.008338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.570 [2024-12-03 10:44:14.008366] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.570 [2024-12-03 10:44:14.008377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008495] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008661] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.570 [2024-12-03 10:44:14.008806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 
10:44:14.008826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:16:43.571 [2024-12-03 10:44:14.008982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.008999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.009006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.009016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.009023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.009029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.009037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.571 [2024-12-03 10:44:14.009049] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:43.571 [2024-12-03 10:44:14.009096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d105f200-e666-43cc-861e-bc4bc7b990de 00:16:43.571 [2024-12-03 10:44:14.009103] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.571 [2024-12-03 10:44:14.009109] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.571 [2024-12-03 10:44:14.009115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.571 [2024-12-03 10:44:14.009123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.571 [2024-12-03 10:44:14.009129] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.571 [2024-12-03 10:44:14.009137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.571 [2024-12-03 10:44:14.009142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.571 [2024-12-03 10:44:14.009149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.571 [2024-12-03 10:44:14.009154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.571 [2024-12-03 10:44:14.009161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.571 [2024-12-03 10:44:14.009167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.571 [2024-12-03 10:44:14.009175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:16:43.571 [2024-12-03 10:44:14.009182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.019312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.571 [2024-12-03 10:44:14.019336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:43.571 [2024-12-03 10:44:14.019347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.112 ms 00:16:43.571 [2024-12-03 10:44:14.019353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.019534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.571 [2024-12-03 10:44:14.019543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.571 
[2024-12-03 10:44:14.019563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:43.571 [2024-12-03 10:44:14.019568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.056733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.056760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.571 [2024-12-03 10:44:14.056770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.056777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.056846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.056854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.571 [2024-12-03 10:44:14.056864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.056870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.056907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.056914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.571 [2024-12-03 10:44:14.056925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.056931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.056947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.056953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.571 [2024-12-03 10:44:14.056960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.056967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.119883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.119916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.571 [2024-12-03 10:44:14.119926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.119932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.143790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.143819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.571 [2024-12-03 10:44:14.143829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.143838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.143885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.143892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.571 [2024-12-03 10:44:14.143901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.143907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.143936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.143942] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.571 [2024-12-03 10:44:14.143950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.143955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.144033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.144042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.571 [2024-12-03 10:44:14.144050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.144071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.144100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.144107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.571 [2024-12-03 10:44:14.144115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.144121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.144156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.144163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.571 [2024-12-03 10:44:14.144173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.571 [2024-12-03 10:44:14.144180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.571 [2024-12-03 10:44:14.144222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.571 [2024-12-03 10:44:14.144229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.572 [2024-12-03 10:44:14.144236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.572 [2024-12-03 10:44:14.144242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.572 [2024-12-03 10:44:14.144361] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 195.948 ms, result 0 00:16:44.507 10:44:14 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.507 [2024-12-03 10:44:14.895655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:44.507 [2024-12-03 10:44:14.895771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72724 ] 00:16:44.507 [2024-12-03 10:44:15.042957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.765 [2024-12-03 10:44:15.210123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.025 [2024-12-03 10:44:15.435452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.025 [2024-12-03 10:44:15.435511] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.025 [2024-12-03 10:44:15.584313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.025 [2024-12-03 10:44:15.584354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:45.025 [2024-12-03 10:44:15.584365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.025 [2024-12-03 10:44:15.584371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.025 [2024-12-03 10:44:15.586493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.025 [2024-12-03 10:44:15.586523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.025 [2024-12-03 10:44:15.586531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:16:45.025 [2024-12-03 10:44:15.586537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.025 [2024-12-03 10:44:15.586593] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:45.025 [2024-12-03 10:44:15.587166] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:45.025 [2024-12-03 10:44:15.587179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.025 [2024-12-03 10:44:15.587185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.025 [2024-12-03 10:44:15.587192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:16:45.025 [2024-12-03 10:44:15.587197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.025 [2024-12-03 10:44:15.588818] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:45.025 [2024-12-03 10:44:15.599429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.025 [2024-12-03 10:44:15.599457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:45.025 [2024-12-03 10:44:15.599468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.612 ms 00:16:45.025 [2024-12-03 10:44:15.599474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.025 [2024-12-03 10:44:15.599545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.025 [2024-12-03 10:44:15.599563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:45.025 [2024-12-03 10:44:15.599570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:45.025 [2024-12-03 10:44:15.599576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.025 [2024-12-03 10:44:15.605730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.025 [2024-12-03 
10:44:15.605753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.026 [2024-12-03 10:44:15.605761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:16:45.026 [2024-12-03 10:44:15.605770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.026 [2024-12-03 10:44:15.605848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.026 [2024-12-03 10:44:15.605856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.026 [2024-12-03 10:44:15.605863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:45.026 [2024-12-03 10:44:15.605869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.026 [2024-12-03 10:44:15.605889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.026 [2024-12-03 10:44:15.605895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:45.026 [2024-12-03 10:44:15.605902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:45.026 [2024-12-03 10:44:15.605907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.026 [2024-12-03 10:44:15.605936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:45.026 [2024-12-03 10:44:15.609135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.026 [2024-12-03 10:44:15.609154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.026 [2024-12-03 10:44:15.609161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:16:45.026 [2024-12-03 10:44:15.609169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.026 [2024-12-03 10:44:15.609201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.026 [2024-12-03 10:44:15.609208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:45.026 [2024-12-03 10:44:15.609214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:45.026 [2024-12-03 10:44:15.609219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.026 [2024-12-03 10:44:15.609234] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:45.026 [2024-12-03 10:44:15.609250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:45.026 [2024-12-03 10:44:15.609277] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:45.026 [2024-12-03 10:44:15.609291] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:45.026 [2024-12-03 10:44:15.609349] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:45.026 [2024-12-03 10:44:15.609357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:45.026 [2024-12-03 10:44:15.609365] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:45.026 [2024-12-03 10:44:15.609373] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609380] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609386] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:45.026 [2024-12-03 10:44:15.609392] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:45.026 [2024-12-03 10:44:15.609398] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:45.026 [2024-12-03 10:44:15.609407] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:45.026 [2024-12-03 10:44:15.609413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.026 [2024-12-03 10:44:15.609419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:45.026 [2024-12-03 10:44:15.609425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:16:45.026 [2024-12-03 10:44:15.609431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.026 [2024-12-03 10:44:15.609482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.026 [2024-12-03 10:44:15.609488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:45.026 [2024-12-03 10:44:15.609495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:45.026 [2024-12-03 10:44:15.609500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.026 [2024-12-03 10:44:15.609567] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:45.026 [2024-12-03 10:44:15.609576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:45.026 [2024-12-03 10:44:15.609582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:45.026 [2024-12-03 10:44:15.609601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:45.026 [2024-12-03 10:44:15.609616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.026 [2024-12-03 10:44:15.609626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:45.026 [2024-12-03 10:44:15.609630] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:45.026 [2024-12-03 10:44:15.609635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.026 [2024-12-03 10:44:15.609640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:45.026 [2024-12-03 10:44:15.609650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:45.026 [2024-12-03 10:44:15.609655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:45.026 [2024-12-03 10:44:15.609665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:45.026 [2024-12-03 10:44:15.609670] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:45.026 [2024-12-03 10:44:15.609680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:45.026 [2024-12-03 10:44:15.609685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:45.026 [2024-12-03 10:44:15.609696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:45.026 [2024-12-03 10:44:15.609711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:45.026 [2024-12-03 10:44:15.609725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:45.026 [2024-12-03 10:44:15.609740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:45.026 [2024-12-03 10:44:15.609755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.026 [2024-12-03 10:44:15.609765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:45.026 [2024-12-03 10:44:15.609770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:45.026 [2024-12-03 10:44:15.609775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.026 [2024-12-03 10:44:15.609779] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:45.026 [2024-12-03 10:44:15.609785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:45.026 [2024-12-03 10:44:15.609790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.026 [2024-12-03 10:44:15.609804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:45.026 [2024-12-03 10:44:15.609809] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:45.026 [2024-12-03 10:44:15.609814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:45.026 [2024-12-03 10:44:15.609819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:45.026 [2024-12-03 10:44:15.609824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:45.026 [2024-12-03 10:44:15.609829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:45.026 [2024-12-03 10:44:15.609835] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:45.026 [2024-12-03 10:44:15.609842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.026 [2024-12-03 10:44:15.609848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:45.026 [2024-12-03 10:44:15.609854] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:45.026 [2024-12-03 10:44:15.609860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:45.026 [2024-12-03 10:44:15.609865] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:45.026 [2024-12-03 10:44:15.609871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:45.026 [2024-12-03 10:44:15.609876] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:45.026 [2024-12-03 10:44:15.609882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:45.026 [2024-12-03 10:44:15.609888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:45.026 [2024-12-03 10:44:15.609893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:45.026 [2024-12-03 10:44:15.609898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:45.027 [2024-12-03 10:44:15.609904] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:45.027 [2024-12-03 10:44:15.609909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:45.027 [2024-12-03 10:44:15.609915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:45.027 [2024-12-03 10:44:15.609920] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:45.027 [2024-12-03 10:44:15.609931] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.027 [2024-12-03 10:44:15.609937] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:45.027 [2024-12-03 10:44:15.609942] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:45.027 [2024-12-03 10:44:15.609947] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:45.027 [2024-12-03 10:44:15.609952] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:45.027 [2024-12-03 10:44:15.609958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.027 [2024-12-03 10:44:15.609964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:45.027 [2024-12-03 10:44:15.609978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:16:45.027 [2024-12-03 10:44:15.609984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.027 [2024-12-03 10:44:15.623805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.027 [2024-12-03 10:44:15.623832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.027 [2024-12-03 10:44:15.623842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.779 ms 00:16:45.027 [2024-12-03 10:44:15.623849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.027 [2024-12-03 10:44:15.623941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.027 [2024-12-03 10:44:15.623950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.027 [2024-12-03 10:44:15.623958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:45.027 [2024-12-03 10:44:15.623965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.666487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.666517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.287 [2024-12-03 10:44:15.666528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.504 ms 00:16:45.287 [2024-12-03 10:44:15.666535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.666593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.666603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.287 [2024-12-03 10:44:15.666613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:45.287 [2024-12-03 10:44:15.666619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.667000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.667023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.287 [2024-12-03 10:44:15.667030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:16:45.287 [2024-12-03 10:44:15.667038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.667151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.667160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.287 [2024-12-03 10:44:15.667167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:16:45.287 [2024-12-03 10:44:15.667172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.680124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.680147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.287 [2024-12-03 10:44:15.680156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.932 ms 00:16:45.287 
[2024-12-03 10:44:15.680165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.691102] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:45.287 [2024-12-03 10:44:15.691128] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:45.287 [2024-12-03 10:44:15.691138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.691145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:45.287 [2024-12-03 10:44:15.691152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.894 ms 00:16:45.287 [2024-12-03 10:44:15.691159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.724945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.724989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:45.287 [2024-12-03 10:44:15.725001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.730 ms 00:16:45.287 [2024-12-03 10:44:15.725009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.737197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.737229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:45.287 [2024-12-03 10:44:15.737247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.110 ms 00:16:45.287 [2024-12-03 10:44:15.737255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.748927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.748958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:45.287 [2024-12-03 10:44:15.748969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.611 ms 00:16:45.287 [2024-12-03 10:44:15.748976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.749353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.749432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.287 [2024-12-03 10:44:15.749441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:45.287 [2024-12-03 10:44:15.749452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.810839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.810876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:45.287 [2024-12-03 10:44:15.810888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.365 ms 00:16:45.287 [2024-12-03 10:44:15.810900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.287 [2024-12-03 10:44:15.821861] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.287 [2024-12-03 10:44:15.838790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.287 [2024-12-03 10:44:15.838828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.288 [2024-12-03 10:44:15.838839] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.818 ms 00:16:45.288 [2024-12-03 10:44:15.838847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.838923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.288 [2024-12-03 10:44:15.838933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:45.288 [2024-12-03 10:44:15.838945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:45.288 [2024-12-03 10:44:15.838953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.839005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.288 [2024-12-03 10:44:15.839014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:45.288 [2024-12-03 10:44:15.839023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:45.288 [2024-12-03 10:44:15.839030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.840333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.288 [2024-12-03 10:44:15.840362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:45.288 [2024-12-03 10:44:15.840373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:16:45.288 [2024-12-03 10:44:15.840381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.840412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.288 [2024-12-03 10:44:15.840423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.288 [2024-12-03 10:44:15.840432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.288 [2024-12-03 10:44:15.840441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.840475] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:45.288 [2024-12-03 10:44:15.840485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.288 [2024-12-03 10:44:15.840492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:45.288 [2024-12-03 10:44:15.840501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:45.288 [2024-12-03 10:44:15.840512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.865084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.288 [2024-12-03 10:44:15.865116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.288 [2024-12-03 10:44:15.865128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.549 ms 00:16:45.288 [2024-12-03 10:44:15.865136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.865222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.288 [2024-12-03 10:44:15.865233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:45.288 [2024-12-03 10:44:15.865242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:45.288 [2024-12-03 10:44:15.865250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.288 [2024-12-03 10:44:15.866151] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.288 [2024-12-03 10:44:15.869307] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.501 ms, result 0 00:16:45.288 [2024-12-03 10:44:15.870275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.288 [2024-12-03 10:44:15.883703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:46.667  [2024-12-03T10:44:18.216Z] Copying: 14/256 [MB] (14 MBps) [2024-12-03T10:44:19.151Z] Copying: 26/256 [MB] (11 MBps) [2024-12-03T10:44:20.088Z] Copying: 38/256 [MB] (11 MBps) [2024-12-03T10:44:21.030Z] Copying: 51/256 [MB] (12 MBps) [2024-12-03T10:44:21.963Z] Copying: 61/256 [MB] (10 MBps) [2024-12-03T10:44:23.341Z] Copying: 73/256 [MB] (11 MBps) [2024-12-03T10:44:24.275Z] Copying: 84/256 [MB] (11 MBps) [2024-12-03T10:44:25.212Z] Copying: 95/256 [MB] (11 MBps) [2024-12-03T10:44:26.149Z] Copying: 107/256 [MB] (11 MBps) [2024-12-03T10:44:27.086Z] Copying: 119/256 [MB] (11 MBps) [2024-12-03T10:44:28.022Z] Copying: 130/256 [MB] (11 MBps) [2024-12-03T10:44:28.964Z] Copying: 142/256 [MB] (11 MBps) [2024-12-03T10:44:30.352Z] Copying: 153/256 [MB] (10 MBps) [2024-12-03T10:44:31.288Z] Copying: 163/256 [MB] (10 MBps) [2024-12-03T10:44:32.226Z] Copying: 174/256 [MB] (10 MBps) [2024-12-03T10:44:33.164Z] Copying: 185/256 [MB] (11 MBps) [2024-12-03T10:44:34.095Z] Copying: 197/256 [MB] (11 MBps) [2024-12-03T10:44:35.027Z] Copying: 209/256 [MB] (11 MBps) [2024-12-03T10:44:35.957Z] Copying: 220/256 [MB] (11 MBps) [2024-12-03T10:44:37.338Z] Copying: 232/256 [MB] (11 MBps) [2024-12-03T10:44:38.273Z] Copying: 243/256 [MB] (11 MBps) [2024-12-03T10:44:38.273Z] Copying: 255/256 [MB] (12 MBps) [2024-12-03T10:44:38.273Z] Copying: 256/256 [MB] (average 11 MBps)[2024-12-03 10:44:38.235388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.660 [2024-12-03 10:44:38.243225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.660 [2024-12-03 10:44:38.243267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:07.660 [2024-12-03 10:44:38.243280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:07.660 [2024-12-03 10:44:38.243287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.660 [2024-12-03 10:44:38.243307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:07.660 [2024-12-03 10:44:38.245505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.660 [2024-12-03 10:44:38.245533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:07.660 [2024-12-03 10:44:38.245541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:17:07.660 [2024-12-03 10:44:38.245548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.660 [2024-12-03 10:44:38.245786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.660 [2024-12-03 10:44:38.245796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:07.660 [2024-12-03 10:44:38.245803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:17:07.660 [2024-12-03 10:44:38.245812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:07.660 [2024-12-03 10:44:38.250911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.661 [2024-12-03 10:44:38.250938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:07.661 [2024-12-03 10:44:38.250947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.085 ms 00:17:07.661 [2024-12-03 10:44:38.250954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.661 [2024-12-03 10:44:38.256132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.661 [2024-12-03 10:44:38.256156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:07.661 [2024-12-03 10:44:38.256164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.151 ms 00:17:07.661 [2024-12-03 10:44:38.256171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.275219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-03 10:44:38.275246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:07.921 [2024-12-03 10:44:38.275255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.993 ms 00:17:07.921 [2024-12-03 10:44:38.275261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.287560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-03 10:44:38.287586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:07.921 [2024-12-03 10:44:38.287594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.261 ms 00:17:07.921 [2024-12-03 10:44:38.287602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.287697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-03 10:44:38.287705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:07.921 [2024-12-03 10:44:38.287712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:07.921 [2024-12-03 10:44:38.287718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.306735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-03 10:44:38.306760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:07.921 [2024-12-03 10:44:38.306767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.003 ms 00:17:07.921 [2024-12-03 10:44:38.306773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.325264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-03 10:44:38.325288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:07.921 [2024-12-03 10:44:38.325296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.457 ms 00:17:07.921 [2024-12-03 10:44:38.325301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.343169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-03 10:44:38.343192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:07.921 [2024-12-03 10:44:38.343199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:17:07.921 
[2024-12-03 10:44:38.343205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.361089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.921 [2024-12-03 10:44:38.361112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:07.921 [2024-12-03 10:44:38.361120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.828 ms 00:17:07.921 [2024-12-03 10:44:38.361125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.921 [2024-12-03 10:44:38.361159] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:07.921 [2024-12-03 10:44:38.361171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361293] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:07.921 [2024-12-03 10:44:38.361377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 
[2024-12-03 10:44:38.361441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:17:07.922 [2024-12-03 10:44:38.361587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:07.922 [2024-12-03 10:44:38.361770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:07.922 [2024-12-03 10:44:38.361776] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d105f200-e666-43cc-861e-bc4bc7b990de 00:17:07.922 [2024-12-03 10:44:38.361783] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:07.922 [2024-12-03 10:44:38.361789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:07.922 [2024-12-03 10:44:38.361795] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:07.922 [2024-12-03 10:44:38.361801] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:07.922 [2024-12-03 10:44:38.361807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:07.922 [2024-12-03 10:44:38.361816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:07.922 [2024-12-03 10:44:38.361822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:07.922 [2024-12-03 10:44:38.361827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:07.922 [2024-12-03 10:44:38.361832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:07.922 [2024-12-03 10:44:38.361839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.922 [2024-12-03 10:44:38.361845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:07.922 [2024-12-03 10:44:38.361852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:17:07.922 [2024-12-03 10:44:38.361858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.922 [2024-12-03 10:44:38.371900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.922 [2024-12-03 10:44:38.371924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:07.922 [2024-12-03 10:44:38.371936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.029 ms 00:17:07.922 [2024-12-03 10:44:38.371942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.922 [2024-12-03 10:44:38.372129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.922 [2024-12-03 10:44:38.372138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:07.922 [2024-12-03 10:44:38.372145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:07.922 [2024-12-03 10:44:38.372150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.922 [2024-12-03 10:44:38.403232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.922 [2024-12-03 10:44:38.403258] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.922 [2024-12-03 10:44:38.403271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.922 [2024-12-03 10:44:38.403277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.922 [2024-12-03 10:44:38.403344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.922 [2024-12-03 10:44:38.403352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.922 [2024-12-03 10:44:38.403358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.922 [2024-12-03 10:44:38.403364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.403400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.403409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.923 [2024-12-03 10:44:38.403415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.403425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.403438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.403445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.923 [2024-12-03 10:44:38.403451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.403457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.463332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.463364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.923 [2024-12-03 10:44:38.463377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.463383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.487183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.923 [2024-12-03 10:44:38.487191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.487197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.487248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.923 [2024-12-03 10:44:38.487255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.487261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.487296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.923 [2024-12-03 10:44:38.487302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.487308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:07.923 [2024-12-03 10:44:38.487395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.923 [2024-12-03 10:44:38.487402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.487408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.487445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:07.923 [2024-12-03 10:44:38.487451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.487457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.487501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.923 [2024-12-03 10:44:38.487507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.487513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.923 [2024-12-03 10:44:38.487582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.923 [2024-12-03 10:44:38.487587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.923 [2024-12-03 10:44:38.487593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.923 [2024-12-03 10:44:38.487721] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 244.488 ms, result 0 00:17:08.860 00:17:08.860 00:17:08.860 10:44:39 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:09.432 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:09.432 10:44:39 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:09.432 10:44:39 -- ftl/trim.sh@109 -- # fio_kill 00:17:09.432 10:44:39 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:09.432 10:44:39 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:09.432 10:44:39 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:09.432 10:44:39 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:09.432 10:44:39 -- ftl/trim.sh@20 -- # killprocess 72670 00:17:09.432 10:44:39 -- common/autotest_common.sh@936 -- # '[' -z 72670 ']' 00:17:09.432 Process with pid 72670 is not found 00:17:09.432 10:44:39 -- common/autotest_common.sh@940 -- # kill -0 72670 00:17:09.432 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72670) - No such process 00:17:09.432 10:44:39 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72670 is not found' 00:17:09.432 ************************************ 00:17:09.432 END TEST ftl_trim 00:17:09.432 ************************************ 00:17:09.432 00:17:09.432 real 1m31.342s 00:17:09.432 user 1m52.899s 00:17:09.432 sys 0m5.007s 00:17:09.432 10:44:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:09.432 10:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.432 10:44:39 -- ftl/ftl.sh@77 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:09.432 10:44:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:09.432 10:44:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.432 10:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.432 ************************************ 00:17:09.432 START TEST ftl_restore 00:17:09.432 ************************************ 00:17:09.432 10:44:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:09.432 * Looking for test storage... 00:17:09.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.432 10:44:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:09.432 10:44:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:09.432 10:44:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:09.432 10:44:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:09.432 10:44:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:09.432 10:44:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:09.432 10:44:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:09.432 10:44:40 -- scripts/common.sh@335 -- # IFS=.-: 00:17:09.432 10:44:40 -- scripts/common.sh@335 -- # read -ra ver1 00:17:09.432 10:44:40 -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.432 10:44:40 -- scripts/common.sh@336 -- # read -ra ver2 00:17:09.432 10:44:40 -- scripts/common.sh@337 -- # local 'op=<' 00:17:09.432 10:44:40 -- scripts/common.sh@339 -- # ver1_l=2 00:17:09.432 10:44:40 -- scripts/common.sh@340 -- # ver2_l=1 00:17:09.432 10:44:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:09.432 10:44:40 -- scripts/common.sh@343 -- # case "$op" in 00:17:09.432 10:44:40 -- scripts/common.sh@344 -- # : 1 00:17:09.432 10:44:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:09.432 10:44:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.432 10:44:40 -- scripts/common.sh@364 -- # decimal 1 00:17:09.432 10:44:40 -- scripts/common.sh@352 -- # local d=1 00:17:09.432 10:44:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.432 10:44:40 -- scripts/common.sh@354 -- # echo 1 00:17:09.432 10:44:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:09.432 10:44:40 -- scripts/common.sh@365 -- # decimal 2 00:17:09.432 10:44:40 -- scripts/common.sh@352 -- # local d=2 00:17:09.432 10:44:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.432 10:44:40 -- scripts/common.sh@354 -- # echo 2 00:17:09.432 10:44:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:09.432 10:44:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:09.432 10:44:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:09.432 10:44:40 -- scripts/common.sh@367 -- # return 0 00:17:09.432 10:44:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.432 10:44:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:09.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.432 --rc genhtml_branch_coverage=1 00:17:09.432 --rc genhtml_function_coverage=1 00:17:09.432 --rc genhtml_legend=1 00:17:09.432 --rc geninfo_all_blocks=1 00:17:09.432 --rc geninfo_unexecuted_blocks=1 00:17:09.432 00:17:09.432 ' 00:17:09.432 10:44:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:09.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.432 --rc genhtml_branch_coverage=1 00:17:09.432 --rc genhtml_function_coverage=1 00:17:09.432 --rc genhtml_legend=1 00:17:09.432 --rc geninfo_all_blocks=1 00:17:09.432 --rc geninfo_unexecuted_blocks=1 00:17:09.432 00:17:09.432 ' 00:17:09.432 10:44:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:09.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.432 --rc genhtml_branch_coverage=1 00:17:09.432 --rc genhtml_function_coverage=1 00:17:09.432 --rc genhtml_legend=1 00:17:09.432 --rc geninfo_all_blocks=1 00:17:09.432 --rc geninfo_unexecuted_blocks=1 00:17:09.432 00:17:09.432 ' 00:17:09.432 10:44:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:09.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.432 --rc genhtml_branch_coverage=1 00:17:09.432 --rc genhtml_function_coverage=1 00:17:09.432 --rc genhtml_legend=1 00:17:09.432 --rc geninfo_all_blocks=1 00:17:09.432 --rc geninfo_unexecuted_blocks=1 00:17:09.432 00:17:09.432 ' 00:17:09.432 10:44:40 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:09.432 10:44:40 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:09.692 10:44:40 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.692 10:44:40 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.692 10:44:40 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
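The scripts/common.sh trace just above is the coverage-tooling guard: it splits the detected lcov version and the threshold "2" on dots and dashes, walks the components pairwise, and picks the matching --rc spelling for LCOV_OPTS based on whether lcov sorts before 2.x. A minimal stand-alone sketch of that comparison (the helper name version_lt is illustrative; the real logic is the cmp_versions/decimal pair being traced, not this code):

    # version_lt A B -> success (0) when dotted version A sorts before B.
    # Illustrative re-sketch of the cmp_versions flow traced above,
    # not the verbatim scripts/common.sh implementation.
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first difference wins
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "pre-2.x lcov"  # matches the 'lt 1.15 2' call above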
00:17:09.692 10:44:40 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:09.692 10:44:40 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.692 10:44:40 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:09.692 10:44:40 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:09.692 10:44:40 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.692 10:44:40 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.692 10:44:40 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:09.692 10:44:40 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:09.692 10:44:40 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:09.692 10:44:40 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:09.692 10:44:40 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:09.692 10:44:40 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:09.692 10:44:40 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.692 10:44:40 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.692 10:44:40 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:09.692 10:44:40 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:09.692 10:44:40 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:09.692 10:44:40 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:09.692 10:44:40 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:09.692 10:44:40 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:09.692 10:44:40 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:09.692 10:44:40 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:09.692 10:44:40 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:09.692 10:44:40 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:09.692 10:44:40 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.692 10:44:40 -- ftl/restore.sh@13 -- # mktemp -d 00:17:09.692 10:44:40 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.7lxdGZ7uFz 00:17:09.692 10:44:40 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:09.692 10:44:40 -- ftl/restore.sh@16 -- # case $opt in 00:17:09.692 10:44:40 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:09.692 10:44:40 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:09.692 10:44:40 -- ftl/restore.sh@23 -- # shift 2 00:17:09.692 10:44:40 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:09.692 10:44:40 -- ftl/restore.sh@25 -- # timeout=240 00:17:09.692 10:44:40 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:09.692 10:44:40 -- ftl/restore.sh@39 -- # svcpid=73048 00:17:09.692 10:44:40 -- ftl/restore.sh@41 -- # waitforlisten 73048 00:17:09.692 10:44:40 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.692 10:44:40 -- common/autotest_common.sh@829 -- # '[' -z 73048 ']' 00:17:09.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
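The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from the waitforlisten helper, which polls the freshly launched spdk_tgt (pid 73048 here) until its RPC socket answers, giving up after max_retries. A rough, assumed shape of that loop (the name wait_for_rpc and the 0.5 s cadence are mine; rpc_get_methods is simply a cheap RPC that proves the server is up — the real helper lives in autotest_common.sh):

    # Poll an SPDK RPC socket until the target responds or retries run out.
    # Sketch only; not the actual waitforlisten implementation.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
        local i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and serving RPCs
            fi
            sleep 0.5
        done
        return 1
    }
    # e.g. wait_for_rpc 73048 /var/tmp/spdk.sock 100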
00:17:09.692 10:44:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.692 10:44:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.692 10:44:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.692 10:44:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.692 10:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 [2024-12-03 10:44:40.163693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:09.692 [2024-12-03 10:44:40.163847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73048 ] 00:17:09.951 [2024-12-03 10:44:40.325721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.951 [2024-12-03 10:44:40.492845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.951 [2024-12-03 10:44:40.493031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.518 10:44:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.518 10:44:40 -- common/autotest_common.sh@862 -- # return 0 00:17:10.518 10:44:40 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:10.518 10:44:40 -- ftl/common.sh@54 -- # local name=nvme0 00:17:10.518 10:44:40 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:10.518 10:44:40 -- ftl/common.sh@56 -- # local size=103424 00:17:10.518 10:44:40 -- ftl/common.sh@59 -- # local base_bdev 00:17:10.518 10:44:40 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:10.797 10:44:41 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:10.797 10:44:41 -- ftl/common.sh@62 -- # local base_size 00:17:10.797 10:44:41 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:10.797 10:44:41 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:10.797 10:44:41 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:10.797 10:44:41 -- common/autotest_common.sh@1369 -- # local bs 00:17:10.797 10:44:41 -- common/autotest_common.sh@1370 -- # local nb 00:17:10.797 10:44:41 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:11.094 10:44:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:11.094 { 00:17:11.094 "name": "nvme0n1", 00:17:11.094 "aliases": [ 00:17:11.094 "1a5b4b66-7c63-4afc-af64-c349d25ee564" 00:17:11.094 ], 00:17:11.094 "product_name": "NVMe disk", 00:17:11.094 "block_size": 4096, 00:17:11.094 "num_blocks": 1310720, 00:17:11.094 "uuid": "1a5b4b66-7c63-4afc-af64-c349d25ee564", 00:17:11.094 "assigned_rate_limits": { 00:17:11.094 "rw_ios_per_sec": 0, 00:17:11.094 "rw_mbytes_per_sec": 0, 00:17:11.094 "r_mbytes_per_sec": 0, 00:17:11.094 "w_mbytes_per_sec": 0 00:17:11.094 }, 00:17:11.094 "claimed": true, 00:17:11.094 "claim_type": "read_many_write_one", 00:17:11.094 "zoned": false, 00:17:11.094 "supported_io_types": { 00:17:11.094 "read": true, 00:17:11.094 "write": true, 00:17:11.094 "unmap": true, 00:17:11.094 "write_zeroes": true, 00:17:11.094 "flush": true, 00:17:11.094 "reset": true, 00:17:11.094 "compare": true, 00:17:11.094 "compare_and_write": false, 00:17:11.094 "abort": true, 00:17:11.094 "nvme_admin": true, 
00:17:11.094 "nvme_io": true 00:17:11.094 }, 00:17:11.094 "driver_specific": { 00:17:11.094 "nvme": [ 00:17:11.094 { 00:17:11.094 "pci_address": "0000:00:07.0", 00:17:11.094 "trid": { 00:17:11.094 "trtype": "PCIe", 00:17:11.094 "traddr": "0000:00:07.0" 00:17:11.094 }, 00:17:11.094 "ctrlr_data": { 00:17:11.094 "cntlid": 0, 00:17:11.094 "vendor_id": "0x1b36", 00:17:11.094 "model_number": "QEMU NVMe Ctrl", 00:17:11.094 "serial_number": "12341", 00:17:11.094 "firmware_revision": "8.0.0", 00:17:11.094 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:11.094 "oacs": { 00:17:11.094 "security": 0, 00:17:11.094 "format": 1, 00:17:11.094 "firmware": 0, 00:17:11.094 "ns_manage": 1 00:17:11.094 }, 00:17:11.094 "multi_ctrlr": false, 00:17:11.094 "ana_reporting": false 00:17:11.094 }, 00:17:11.094 "vs": { 00:17:11.094 "nvme_version": "1.4" 00:17:11.094 }, 00:17:11.094 "ns_data": { 00:17:11.094 "id": 1, 00:17:11.094 "can_share": false 00:17:11.094 } 00:17:11.094 } 00:17:11.094 ], 00:17:11.094 "mp_policy": "active_passive" 00:17:11.094 } 00:17:11.094 } 00:17:11.094 ]' 00:17:11.094 10:44:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:11.095 10:44:41 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:11.095 10:44:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:11.095 10:44:41 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:11.095 10:44:41 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:11.095 10:44:41 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:11.095 10:44:41 -- ftl/common.sh@63 -- # base_size=5120 00:17:11.095 10:44:41 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:11.095 10:44:41 -- ftl/common.sh@67 -- # clear_lvols 00:17:11.095 10:44:41 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:11.095 10:44:41 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:11.353 10:44:41 -- ftl/common.sh@28 -- # stores=9915394c-9cf1-49d3-b957-c34d6cfaf70b 00:17:11.353 10:44:41 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:11.353 10:44:41 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9915394c-9cf1-49d3-b957-c34d6cfaf70b 00:17:11.353 10:44:41 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:11.612 10:44:42 -- ftl/common.sh@68 -- # lvs=0969ec1a-327f-43ef-9eba-bfd861c3e4ed 00:17:11.612 10:44:42 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0969ec1a-327f-43ef-9eba-bfd861c3e4ed 00:17:11.870 10:44:42 -- ftl/restore.sh@43 -- # split_bdev=8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:11.870 10:44:42 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:11.870 10:44:42 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:11.870 10:44:42 -- ftl/common.sh@35 -- # local name=nvc0 00:17:11.870 10:44:42 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:11.870 10:44:42 -- ftl/common.sh@37 -- # local base_bdev=8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:11.870 10:44:42 -- ftl/common.sh@38 -- # local cache_size= 00:17:11.870 10:44:42 -- ftl/common.sh@41 -- # get_bdev_size 8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:11.870 10:44:42 -- common/autotest_common.sh@1367 -- # local bdev_name=8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:11.870 10:44:42 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:11.870 10:44:42 -- common/autotest_common.sh@1369 -- # local bs 00:17:11.870 
10:44:42 -- common/autotest_common.sh@1370 -- # local nb 00:17:11.870 10:44:42 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:12.129 10:44:42 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:12.129 { 00:17:12.129 "name": "8c695dec-3c21-4298-8a6e-eb213c7e009d", 00:17:12.129 "aliases": [ 00:17:12.129 "lvs/nvme0n1p0" 00:17:12.129 ], 00:17:12.129 "product_name": "Logical Volume", 00:17:12.129 "block_size": 4096, 00:17:12.129 "num_blocks": 26476544, 00:17:12.129 "uuid": "8c695dec-3c21-4298-8a6e-eb213c7e009d", 00:17:12.129 "assigned_rate_limits": { 00:17:12.129 "rw_ios_per_sec": 0, 00:17:12.129 "rw_mbytes_per_sec": 0, 00:17:12.129 "r_mbytes_per_sec": 0, 00:17:12.129 "w_mbytes_per_sec": 0 00:17:12.129 }, 00:17:12.129 "claimed": false, 00:17:12.129 "zoned": false, 00:17:12.129 "supported_io_types": { 00:17:12.129 "read": true, 00:17:12.129 "write": true, 00:17:12.129 "unmap": true, 00:17:12.129 "write_zeroes": true, 00:17:12.129 "flush": false, 00:17:12.129 "reset": true, 00:17:12.129 "compare": false, 00:17:12.129 "compare_and_write": false, 00:17:12.129 "abort": false, 00:17:12.129 "nvme_admin": false, 00:17:12.129 "nvme_io": false 00:17:12.129 }, 00:17:12.129 "driver_specific": { 00:17:12.129 "lvol": { 00:17:12.129 "lvol_store_uuid": "0969ec1a-327f-43ef-9eba-bfd861c3e4ed", 00:17:12.129 "base_bdev": "nvme0n1", 00:17:12.129 "thin_provision": true, 00:17:12.129 "snapshot": false, 00:17:12.129 "clone": false, 00:17:12.129 "esnap_clone": false 00:17:12.129 } 00:17:12.129 } 00:17:12.129 } 00:17:12.129 ]' 00:17:12.129 10:44:42 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:12.129 10:44:42 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:12.129 10:44:42 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:12.129 10:44:42 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:12.129 10:44:42 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:12.129 10:44:42 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:12.129 10:44:42 -- ftl/common.sh@41 -- # local base_size=5171 00:17:12.129 10:44:42 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:12.129 10:44:42 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:12.413 10:44:42 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:12.413 10:44:42 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:12.413 10:44:42 -- ftl/common.sh@48 -- # get_bdev_size 8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:12.413 10:44:42 -- common/autotest_common.sh@1367 -- # local bdev_name=8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:12.413 10:44:42 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:12.413 10:44:42 -- common/autotest_common.sh@1369 -- # local bs 00:17:12.413 10:44:42 -- common/autotest_common.sh@1370 -- # local nb 00:17:12.413 10:44:42 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:12.413 10:44:42 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:12.413 { 00:17:12.413 "name": "8c695dec-3c21-4298-8a6e-eb213c7e009d", 00:17:12.413 "aliases": [ 00:17:12.413 "lvs/nvme0n1p0" 00:17:12.413 ], 00:17:12.413 "product_name": "Logical Volume", 00:17:12.413 "block_size": 4096, 00:17:12.413 "num_blocks": 26476544, 00:17:12.413 "uuid": "8c695dec-3c21-4298-8a6e-eb213c7e009d", 00:17:12.413 "assigned_rate_limits": { 00:17:12.413 "rw_ios_per_sec": 0, 
00:17:12.413 "rw_mbytes_per_sec": 0, 00:17:12.413 "r_mbytes_per_sec": 0, 00:17:12.413 "w_mbytes_per_sec": 0 00:17:12.413 }, 00:17:12.413 "claimed": false, 00:17:12.413 "zoned": false, 00:17:12.413 "supported_io_types": { 00:17:12.413 "read": true, 00:17:12.413 "write": true, 00:17:12.413 "unmap": true, 00:17:12.413 "write_zeroes": true, 00:17:12.413 "flush": false, 00:17:12.413 "reset": true, 00:17:12.413 "compare": false, 00:17:12.413 "compare_and_write": false, 00:17:12.413 "abort": false, 00:17:12.413 "nvme_admin": false, 00:17:12.413 "nvme_io": false 00:17:12.413 }, 00:17:12.413 "driver_specific": { 00:17:12.413 "lvol": { 00:17:12.413 "lvol_store_uuid": "0969ec1a-327f-43ef-9eba-bfd861c3e4ed", 00:17:12.413 "base_bdev": "nvme0n1", 00:17:12.413 "thin_provision": true, 00:17:12.413 "snapshot": false, 00:17:12.413 "clone": false, 00:17:12.413 "esnap_clone": false 00:17:12.413 } 00:17:12.413 } 00:17:12.413 } 00:17:12.413 ]' 00:17:12.413 10:44:42 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:12.413 10:44:43 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:12.413 10:44:43 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:12.671 10:44:43 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:12.671 10:44:43 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:12.671 10:44:43 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:12.671 10:44:43 -- ftl/common.sh@48 -- # cache_size=5171 00:17:12.671 10:44:43 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:12.671 10:44:43 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:12.671 10:44:43 -- ftl/restore.sh@48 -- # get_bdev_size 8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:12.671 10:44:43 -- common/autotest_common.sh@1367 -- # local bdev_name=8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:12.671 10:44:43 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:12.671 10:44:43 -- common/autotest_common.sh@1369 -- # local bs 00:17:12.671 10:44:43 -- common/autotest_common.sh@1370 -- # local nb 00:17:12.671 10:44:43 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c695dec-3c21-4298-8a6e-eb213c7e009d 00:17:12.930 10:44:43 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:12.930 { 00:17:12.930 "name": "8c695dec-3c21-4298-8a6e-eb213c7e009d", 00:17:12.930 "aliases": [ 00:17:12.930 "lvs/nvme0n1p0" 00:17:12.930 ], 00:17:12.930 "product_name": "Logical Volume", 00:17:12.930 "block_size": 4096, 00:17:12.930 "num_blocks": 26476544, 00:17:12.930 "uuid": "8c695dec-3c21-4298-8a6e-eb213c7e009d", 00:17:12.930 "assigned_rate_limits": { 00:17:12.930 "rw_ios_per_sec": 0, 00:17:12.930 "rw_mbytes_per_sec": 0, 00:17:12.930 "r_mbytes_per_sec": 0, 00:17:12.930 "w_mbytes_per_sec": 0 00:17:12.930 }, 00:17:12.930 "claimed": false, 00:17:12.930 "zoned": false, 00:17:12.930 "supported_io_types": { 00:17:12.930 "read": true, 00:17:12.930 "write": true, 00:17:12.930 "unmap": true, 00:17:12.930 "write_zeroes": true, 00:17:12.930 "flush": false, 00:17:12.930 "reset": true, 00:17:12.930 "compare": false, 00:17:12.930 "compare_and_write": false, 00:17:12.930 "abort": false, 00:17:12.930 "nvme_admin": false, 00:17:12.930 "nvme_io": false 00:17:12.930 }, 00:17:12.930 "driver_specific": { 00:17:12.930 "lvol": { 00:17:12.930 "lvol_store_uuid": "0969ec1a-327f-43ef-9eba-bfd861c3e4ed", 00:17:12.930 "base_bdev": "nvme0n1", 00:17:12.930 "thin_provision": true, 00:17:12.930 "snapshot": false, 00:17:12.930 "clone": false, 
00:17:12.930 "esnap_clone": false 00:17:12.930 } 00:17:12.930 } 00:17:12.930 } 00:17:12.930 ]' 00:17:12.930 10:44:43 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:12.930 10:44:43 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:12.930 10:44:43 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:12.930 10:44:43 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:12.930 10:44:43 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:12.930 10:44:43 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:12.930 10:44:43 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:12.930 10:44:43 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8c695dec-3c21-4298-8a6e-eb213c7e009d --l2p_dram_limit 10' 00:17:12.930 10:44:43 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:12.930 10:44:43 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:12.930 10:44:43 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:12.930 10:44:43 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:12.930 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:12.930 10:44:43 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8c695dec-3c21-4298-8a6e-eb213c7e009d --l2p_dram_limit 10 -c nvc0n1p0 00:17:13.190 [2024-12-03 10:44:43.664213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.664259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:13.190 [2024-12-03 10:44:43.664273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:13.190 [2024-12-03 10:44:43.664281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.664319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.664327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.190 [2024-12-03 10:44:43.664335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:13.190 [2024-12-03 10:44:43.664341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.664358] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:13.190 [2024-12-03 10:44:43.664918] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:13.190 [2024-12-03 10:44:43.664939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.664946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.190 [2024-12-03 10:44:43.664954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:17:13.190 [2024-12-03 10:44:43.664960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.665294] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9da631bc-1ae5-44d2-a921-b83fcffa847e 00:17:13.190 [2024-12-03 10:44:43.666563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.666589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:13.190 [2024-12-03 10:44:43.666598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:13.190 [2024-12-03 
10:44:43.666607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.673432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.673458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.190 [2024-12-03 10:44:43.673465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:17:13.190 [2024-12-03 10:44:43.673473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.673540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.673549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.190 [2024-12-03 10:44:43.673556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:13.190 [2024-12-03 10:44:43.673566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.673603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.673615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:13.190 [2024-12-03 10:44:43.673622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:13.190 [2024-12-03 10:44:43.673630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.673649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.190 [2024-12-03 10:44:43.676926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.676950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.190 [2024-12-03 10:44:43.676959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:17:13.190 [2024-12-03 10:44:43.676965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.676994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.677000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:13.190 [2024-12-03 10:44:43.677008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:13.190 [2024-12-03 10:44:43.677014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.677033] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:13.190 [2024-12-03 10:44:43.677137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:13.190 [2024-12-03 10:44:43.677152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:13.190 [2024-12-03 10:44:43.677160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:13.190 [2024-12-03 10:44:43.677170] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677177] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677187] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:13.190 [2024-12-03 10:44:43.677199] ftl_layout.c: 681:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:13.190 [2024-12-03 10:44:43.677207] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:13.190 [2024-12-03 10:44:43.677213] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:13.190 [2024-12-03 10:44:43.677221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.677227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:13.190 [2024-12-03 10:44:43.677234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:17:13.190 [2024-12-03 10:44:43.677240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.677289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.190 [2024-12-03 10:44:43.677296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:13.190 [2024-12-03 10:44:43.677303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:13.190 [2024-12-03 10:44:43.677310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.190 [2024-12-03 10:44:43.677367] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:13.190 [2024-12-03 10:44:43.677375] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:13.190 [2024-12-03 10:44:43.677382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:13.190 [2024-12-03 10:44:43.677401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:13.190 [2024-12-03 10:44:43.677419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.190 [2024-12-03 10:44:43.677432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:13.190 [2024-12-03 10:44:43.677437] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:13.190 [2024-12-03 10:44:43.677445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.190 [2024-12-03 10:44:43.677450] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:13.190 [2024-12-03 10:44:43.677457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:13.190 [2024-12-03 10:44:43.677462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:13.190 [2024-12-03 10:44:43.677476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:13.190 [2024-12-03 10:44:43.677483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677488] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:13.190 [2024-12-03 10:44:43.677494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:13.190 
[2024-12-03 10:44:43.677500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:13.190 [2024-12-03 10:44:43.677512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:13.190 [2024-12-03 10:44:43.677531] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:13.190 [2024-12-03 10:44:43.677547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:13.190 [2024-12-03 10:44:43.677566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:13.190 [2024-12-03 10:44:43.677582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.190 [2024-12-03 10:44:43.677731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:13.190 [2024-12-03 10:44:43.677739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:13.190 [2024-12-03 10:44:43.677744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.190 [2024-12-03 10:44:43.677750] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:13.190 [2024-12-03 10:44:43.677759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:13.190 [2024-12-03 10:44:43.677766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.190 [2024-12-03 10:44:43.677781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:13.190 [2024-12-03 10:44:43.677786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:13.190 [2024-12-03 10:44:43.677793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:13.190 [2024-12-03 10:44:43.677798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:13.190 [2024-12-03 10:44:43.677806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:13.190 [2024-12-03 10:44:43.677812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:13.190 [2024-12-03 10:44:43.677819] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:13.190 [2024-12-03 10:44:43.677826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 
blk_sz:0x20 00:17:13.190 [2024-12-03 10:44:43.677834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:13.190 [2024-12-03 10:44:43.677839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:13.190 [2024-12-03 10:44:43.677846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:13.190 [2024-12-03 10:44:43.677851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:13.191 [2024-12-03 10:44:43.677858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:13.191 [2024-12-03 10:44:43.677863] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:13.191 [2024-12-03 10:44:43.677870] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:13.191 [2024-12-03 10:44:43.677875] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:13.191 [2024-12-03 10:44:43.677882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:13.191 [2024-12-03 10:44:43.677887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:13.191 [2024-12-03 10:44:43.677894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:13.191 [2024-12-03 10:44:43.677900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:13.191 [2024-12-03 10:44:43.677910] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:13.191 [2024-12-03 10:44:43.677915] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:13.191 [2024-12-03 10:44:43.677923] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.191 [2024-12-03 10:44:43.677929] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:13.191 [2024-12-03 10:44:43.677936] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:13.191 [2024-12-03 10:44:43.677941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:13.191 [2024-12-03 10:44:43.677948] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:13.191 [2024-12-03 10:44:43.677954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.677961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 
00:17:13.191 [2024-12-03 10:44:43.677967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:17:13.191 [2024-12-03 10:44:43.677975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.691778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.691806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:13.191 [2024-12-03 10:44:43.691815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.764 ms 00:17:13.191 [2024-12-03 10:44:43.691823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.691894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.691904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:13.191 [2024-12-03 10:44:43.691913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:13.191 [2024-12-03 10:44:43.691920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.718332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.718359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:13.191 [2024-12-03 10:44:43.718367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.381 ms 00:17:13.191 [2024-12-03 10:44:43.718376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.718399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.718407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:13.191 [2024-12-03 10:44:43.718414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:13.191 [2024-12-03 10:44:43.718423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.718817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.718844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:13.191 [2024-12-03 10:44:43.718852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:17:13.191 [2024-12-03 10:44:43.718861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.718954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.718969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:13.191 [2024-12-03 10:44:43.718976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:13.191 [2024-12-03 10:44:43.718984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.732808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.732833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:13.191 [2024-12-03 10:44:43.732841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.810 ms 00:17:13.191 [2024-12-03 10:44:43.732849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.191 [2024-12-03 10:44:43.742781] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:13.191 [2024-12-03 
10:44:43.745703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.191 [2024-12-03 10:44:43.745726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:13.191 [2024-12-03 10:44:43.745736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.796 ms 00:17:13.191 [2024-12-03 10:44:43.745743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.449 [2024-12-03 10:44:43.829168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.449 [2024-12-03 10:44:43.829199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:13.449 [2024-12-03 10:44:43.829211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.402 ms 00:17:13.449 [2024-12-03 10:44:43.829217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.449 [2024-12-03 10:44:43.829250] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:13.449 [2024-12-03 10:44:43.829259] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:16.737 [2024-12-03 10:44:47.306044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.737 [2024-12-03 10:44:47.306111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:16.737 [2024-12-03 10:44:47.306127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3476.781 ms 00:17:16.737 [2024-12-03 10:44:47.306134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.737 [2024-12-03 10:44:47.306298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.737 [2024-12-03 10:44:47.306308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:16.737 [2024-12-03 10:44:47.306320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:16.737 [2024-12-03 10:44:47.306326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.737 [2024-12-03 10:44:47.325758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.737 [2024-12-03 10:44:47.325788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:16.737 [2024-12-03 10:44:47.325800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.396 ms 00:17:16.737 [2024-12-03 10:44:47.325806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.737 [2024-12-03 10:44:47.343761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.737 [2024-12-03 10:44:47.343786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:16.737 [2024-12-03 10:44:47.343799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.923 ms 00:17:16.737 [2024-12-03 10:44:47.343805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.737 [2024-12-03 10:44:47.344068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.737 [2024-12-03 10:44:47.344077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:16.737 [2024-12-03 10:44:47.344086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:17:16.737 [2024-12-03 10:44:47.344092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.995 [2024-12-03 10:44:47.396305] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:16.995 [2024-12-03 10:44:47.396331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:16.995 [2024-12-03 10:44:47.396342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.177 ms 00:17:16.995 [2024-12-03 10:44:47.396348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.995 [2024-12-03 10:44:47.416188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.995 [2024-12-03 10:44:47.416215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:16.995 [2024-12-03 10:44:47.416226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.808 ms 00:17:16.995 [2024-12-03 10:44:47.416232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.995 [2024-12-03 10:44:47.417624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.995 [2024-12-03 10:44:47.417649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:16.995 [2024-12-03 10:44:47.417660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:17:16.995 [2024-12-03 10:44:47.417666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.995 [2024-12-03 10:44:47.436599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.995 [2024-12-03 10:44:47.436625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:16.995 [2024-12-03 10:44:47.436635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.904 ms 00:17:16.995 [2024-12-03 10:44:47.436641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.995 [2024-12-03 10:44:47.436679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.995 [2024-12-03 10:44:47.436686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:16.995 [2024-12-03 10:44:47.436695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:16.995 [2024-12-03 10:44:47.436700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.995 [2024-12-03 10:44:47.436770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.995 [2024-12-03 10:44:47.436777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:16.995 [2024-12-03 10:44:47.436785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:16.995 [2024-12-03 10:44:47.436792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.995 [2024-12-03 10:44:47.437700] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3773.101 ms, result 0 00:17:16.995 { 00:17:16.995 "name": "ftl0", 00:17:16.995 "uuid": "9da631bc-1ae5-44d2-a921-b83fcffa847e" 00:17:16.995 } 00:17:16.995 10:44:47 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:16.995 10:44:47 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:17.253 10:44:47 -- ftl/restore.sh@63 -- # echo ']}' 00:17:17.253 10:44:47 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:17.253 [2024-12-03 10:44:47.813116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.813154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:17.253 [2024-12-03 
10:44:47.813163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:17.253 [2024-12-03 10:44:47.813171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.253 [2024-12-03 10:44:47.813190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:17.253 [2024-12-03 10:44:47.815467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.815487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:17.253 [2024-12-03 10:44:47.815498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:17:17.253 [2024-12-03 10:44:47.815518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.253 [2024-12-03 10:44:47.815720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.815729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:17.253 [2024-12-03 10:44:47.815738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:17:17.253 [2024-12-03 10:44:47.815744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.253 [2024-12-03 10:44:47.818229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.818246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:17.253 [2024-12-03 10:44:47.818255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.471 ms 00:17:17.253 [2024-12-03 10:44:47.818262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.253 [2024-12-03 10:44:47.822963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.822986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:17.253 [2024-12-03 10:44:47.822995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.684 ms 00:17:17.253 [2024-12-03 10:44:47.823000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.253 [2024-12-03 10:44:47.841343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.841368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:17.253 [2024-12-03 10:44:47.841377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.270 ms 00:17:17.253 [2024-12-03 10:44:47.841383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.253 [2024-12-03 10:44:47.854656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.854683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:17.253 [2024-12-03 10:44:47.854694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.242 ms 00:17:17.253 [2024-12-03 10:44:47.854700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.253 [2024-12-03 10:44:47.854816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.253 [2024-12-03 10:44:47.854824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:17.253 [2024-12-03 10:44:47.854833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:17.253 [2024-12-03 10:44:47.854842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.512 [2024-12-03 10:44:47.873551] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.512 [2024-12-03 10:44:47.873576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:17.512 [2024-12-03 10:44:47.873586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.692 ms 00:17:17.512 [2024-12-03 10:44:47.873591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.512 [2024-12-03 10:44:47.891987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.512 [2024-12-03 10:44:47.892011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:17.512 [2024-12-03 10:44:47.892020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.367 ms 00:17:17.512 [2024-12-03 10:44:47.892026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.512 [2024-12-03 10:44:47.909953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.512 [2024-12-03 10:44:47.909977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:17.512 [2024-12-03 10:44:47.909987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.898 ms 00:17:17.512 [2024-12-03 10:44:47.909992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.512 [2024-12-03 10:44:47.927758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.512 [2024-12-03 10:44:47.927782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:17.512 [2024-12-03 10:44:47.927791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.711 ms 00:17:17.512 [2024-12-03 10:44:47.927796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.512 [2024-12-03 10:44:47.927826] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:17.512 [2024-12-03 10:44:47.927841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:17.512 [2024-12-03 10:44:47.927937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.927998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928101] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928266] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 
10:44:47.928436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:17.513 [2024-12-03 10:44:47.928538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:17.513 [2024-12-03 10:44:47.928546] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9da631bc-1ae5-44d2-a921-b83fcffa847e 00:17:17.513 [2024-12-03 10:44:47.928552] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:17.513 [2024-12-03 10:44:47.928559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:17.513 [2024-12-03 10:44:47.928565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:17.514 [2024-12-03 10:44:47.928573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:17.514 [2024-12-03 10:44:47.928579] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:17.514 [2024-12-03 10:44:47.928586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:17.514 [2024-12-03 10:44:47.928591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:17.514 [2024-12-03 10:44:47.928598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:17.514 [2024-12-03 10:44:47.928602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:17.514 [2024-12-03 10:44:47.928611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.514 [2024-12-03 10:44:47.928616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:17.514 [2024-12-03 10:44:47.928626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.786 ms 00:17:17.514 [2024-12-03 10:44:47.928631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:47.939021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.514 [2024-12-03 10:44:47.939044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:17.514 [2024-12-03 10:44:47.939059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.365 ms 00:17:17.514 [2024-12-03 10:44:47.939066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:47.939215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.514 [2024-12-03 10:44:47.939224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:17.514 [2024-12-03 10:44:47.939232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:17.514 [2024-12-03 10:44:47.939238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:47.976404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:47.976430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.514 [2024-12-03 10:44:47.976440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:47.976446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:47.976499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:47.976507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.514 [2024-12-03 10:44:47.976516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:47.976521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:47.976571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:47.976579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.514 [2024-12-03 10:44:47.976587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:47.976593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:47.976607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:47.976615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.514 [2024-12-03 10:44:47.976624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:47.976629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.039142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.039175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.514 [2024-12-03 10:44:48.039187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.039194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.062835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.062864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.514 
[2024-12-03 10:44:48.062874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.062881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.062936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.062944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.514 [2024-12-03 10:44:48.062952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.062958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.062996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.063003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.514 [2024-12-03 10:44:48.063011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.063019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.063115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.063124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.514 [2024-12-03 10:44:48.063132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.063138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.063167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.063175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:17.514 [2024-12-03 10:44:48.063183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.063189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.063226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.063233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.514 [2024-12-03 10:44:48.063241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.063247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.063288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.514 [2024-12-03 10:44:48.063295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.514 [2024-12-03 10:44:48.063303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.514 [2024-12-03 10:44:48.063311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.514 [2024-12-03 10:44:48.063428] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 250.271 ms, result 0 00:17:17.514 true 00:17:17.514 10:44:48 -- ftl/restore.sh@66 -- # killprocess 73048 00:17:17.514 10:44:48 -- common/autotest_common.sh@936 -- # '[' -z 73048 ']' 00:17:17.514 10:44:48 -- common/autotest_common.sh@940 -- # kill -0 73048 00:17:17.514 10:44:48 -- common/autotest_common.sh@941 -- # uname 00:17:17.514 10:44:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.514 10:44:48 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 73048 00:17:17.514 10:44:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:17.514 killing process with pid 73048 00:17:17.514 10:44:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:17.514 10:44:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73048' 00:17:17.514 10:44:48 -- common/autotest_common.sh@955 -- # kill 73048 00:17:17.514 10:44:48 -- common/autotest_common.sh@960 -- # wait 73048 00:17:25.660 10:44:55 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:28.965 262144+0 records in 00:17:28.965 262144+0 records out 00:17:28.965 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.61702 s, 297 MB/s 00:17:28.965 10:44:59 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:30.885 10:45:01 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.885 [2024-12-03 10:45:01.298414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:30.885 [2024-12-03 10:45:01.298743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73269 ] 00:17:30.885 [2024-12-03 10:45:01.441912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.146 [2024-12-03 10:45:01.707375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.716 [2024-12-03 10:45:02.030845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.716 [2024-12-03 10:45:02.030936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.716 [2024-12-03 10:45:02.192009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.192089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.717 [2024-12-03 10:45:02.192104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:31.717 [2024-12-03 10:45:02.192116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.192174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.192185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.717 [2024-12-03 10:45:02.192194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:31.717 [2024-12-03 10:45:02.192202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.192230] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.717 [2024-12-03 10:45:02.193265] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.717 [2024-12-03 10:45:02.193318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.193330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.717 [2024-12-03 10:45:02.193341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:17:31.717 [2024-12-03 10:45:02.193349] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.195127] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:31.717 [2024-12-03 10:45:02.209764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.209818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:31.717 [2024-12-03 10:45:02.209832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.639 ms 00:17:31.717 [2024-12-03 10:45:02.209841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.209920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.209931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:31.717 [2024-12-03 10:45:02.209941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:31.717 [2024-12-03 10:45:02.209948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.218257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.218302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.717 [2024-12-03 10:45:02.218313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.228 ms 00:17:31.717 [2024-12-03 10:45:02.218322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.218422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.218433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.717 [2024-12-03 10:45:02.218443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:31.717 [2024-12-03 10:45:02.218451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.218500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.218510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.717 [2024-12-03 10:45:02.218518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:31.717 [2024-12-03 10:45:02.218526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.218559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.717 [2024-12-03 10:45:02.222793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.222830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.717 [2024-12-03 10:45:02.222844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.248 ms 00:17:31.717 [2024-12-03 10:45:02.222853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.222893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.222903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.717 [2024-12-03 10:45:02.222914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:31.717 [2024-12-03 10:45:02.222926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.222978] ftl_layout.c: 
605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:31.717 [2024-12-03 10:45:02.223003] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:31.717 [2024-12-03 10:45:02.223039] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:31.717 [2024-12-03 10:45:02.223074] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:31.717 [2024-12-03 10:45:02.223151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:31.717 [2024-12-03 10:45:02.223165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.717 [2024-12-03 10:45:02.223179] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:31.717 [2024-12-03 10:45:02.223191] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223202] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223211] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:31.717 [2024-12-03 10:45:02.223219] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.717 [2024-12-03 10:45:02.223227] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:31.717 [2024-12-03 10:45:02.223235] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:31.717 [2024-12-03 10:45:02.223244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.223252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.717 [2024-12-03 10:45:02.223262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:31.717 [2024-12-03 10:45:02.223271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.223333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.223342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.717 [2024-12-03 10:45:02.223351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:31.717 [2024-12-03 10:45:02.223359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.223430] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.717 [2024-12-03 10:45:02.223439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.717 [2024-12-03 10:45:02.223448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223483] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.717 [2024-12-03 10:45:02.223490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:17:31.717 [2024-12-03 10:45:02.223514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.717 [2024-12-03 10:45:02.223528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.717 [2024-12-03 10:45:02.223535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:31.717 [2024-12-03 10:45:02.223542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.717 [2024-12-03 10:45:02.223549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.717 [2024-12-03 10:45:02.223556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:31.717 [2024-12-03 10:45:02.223563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.717 [2024-12-03 10:45:02.223585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:31.717 [2024-12-03 10:45:02.223591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:31.717 [2024-12-03 10:45:02.223605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:31.717 [2024-12-03 10:45:02.223612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.717 [2024-12-03 10:45:02.223627] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.717 [2024-12-03 10:45:02.223647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.717 [2024-12-03 10:45:02.223668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.717 [2024-12-03 10:45:02.223688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.717 [2024-12-03 10:45:02.223708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.717 [2024-12-03 10:45:02.223721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.717 [2024-12-03 10:45:02.223728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:31.717 [2024-12-03 10:45:02.223736] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.717 [2024-12-03 10:45:02.223742] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.717 [2024-12-03 10:45:02.223753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.717 [2024-12-03 10:45:02.223761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.717 [2024-12-03 10:45:02.223777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:31.717 [2024-12-03 10:45:02.223784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.717 [2024-12-03 10:45:02.223791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.717 [2024-12-03 10:45:02.223798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:31.717 [2024-12-03 10:45:02.223804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.717 [2024-12-03 10:45:02.223811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.717 [2024-12-03 10:45:02.223818] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.717 [2024-12-03 10:45:02.223828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.717 [2024-12-03 10:45:02.223836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:31.717 [2024-12-03 10:45:02.223843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:31.717 [2024-12-03 10:45:02.223851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:31.717 [2024-12-03 10:45:02.223858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:31.717 [2024-12-03 10:45:02.223865] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:31.717 [2024-12-03 10:45:02.223872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:31.717 [2024-12-03 10:45:02.223879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:31.717 [2024-12-03 10:45:02.223886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:31.717 [2024-12-03 10:45:02.223894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:31.717 [2024-12-03 10:45:02.223901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:31.717 [2024-12-03 10:45:02.223908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:31.717 [2024-12-03 10:45:02.223916] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 
blk_offs:0x61e0 blk_sz:0x100000 00:17:31.717 [2024-12-03 10:45:02.223924] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:31.717 [2024-12-03 10:45:02.223932] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.717 [2024-12-03 10:45:02.223940] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.717 [2024-12-03 10:45:02.223948] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.717 [2024-12-03 10:45:02.223956] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.717 [2024-12-03 10:45:02.223963] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.717 [2024-12-03 10:45:02.223970] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.717 [2024-12-03 10:45:02.223979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.223987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.717 [2024-12-03 10:45:02.223995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:17:31.717 [2024-12-03 10:45:02.224003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.242353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.242407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.717 [2024-12-03 10:45:02.242420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.308 ms 00:17:31.717 [2024-12-03 10:45:02.242437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.242528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.242537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.717 [2024-12-03 10:45:02.242546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:31.717 [2024-12-03 10:45:02.242553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-12-03 10:45:02.286660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-12-03 10:45:02.286716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.718 [2024-12-03 10:45:02.286730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.053 ms 00:17:31.718 [2024-12-03 10:45:02.286739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-12-03 10:45:02.286790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-12-03 10:45:02.286801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.718 [2024-12-03 10:45:02.286811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:31.718 [2024-12-03 10:45:02.286819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-12-03 10:45:02.287440] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-12-03 10:45:02.287504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.718 [2024-12-03 10:45:02.287516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:17:31.718 [2024-12-03 10:45:02.287532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-12-03 10:45:02.287660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-12-03 10:45:02.287670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.718 [2024-12-03 10:45:02.287679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:31.718 [2024-12-03 10:45:02.287687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-12-03 10:45:02.304519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-12-03 10:45:02.304567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.718 [2024-12-03 10:45:02.304579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.807 ms 00:17:31.718 [2024-12-03 10:45:02.304587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-12-03 10:45:02.319880] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:31.718 [2024-12-03 10:45:02.319945] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:31.718 [2024-12-03 10:45:02.319961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-12-03 10:45:02.319971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:31.718 [2024-12-03 10:45:02.319982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.261 ms 00:17:31.718 [2024-12-03 10:45:02.319990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.346064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.346119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:31.977 [2024-12-03 10:45:02.346132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.017 ms 00:17:31.977 [2024-12-03 10:45:02.346141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.359109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.359159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:31.977 [2024-12-03 10:45:02.359171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.913 ms 00:17:31.977 [2024-12-03 10:45:02.359180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.372142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.372192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:31.977 [2024-12-03 10:45:02.372213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.914 ms 00:17:31.977 [2024-12-03 10:45:02.372221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.372612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:31.977 [2024-12-03 10:45:02.372636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.977 [2024-12-03 10:45:02.372646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:31.977 [2024-12-03 10:45:02.372654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.439938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.439995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:31.977 [2024-12-03 10:45:02.440010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.266 ms 00:17:31.977 [2024-12-03 10:45:02.440018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.451970] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:31.977 [2024-12-03 10:45:02.455043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.455101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.977 [2024-12-03 10:45:02.455114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.948 ms 00:17:31.977 [2024-12-03 10:45:02.455122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.455200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.455210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:31.977 [2024-12-03 10:45:02.455220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.977 [2024-12-03 10:45:02.455229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.455298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.455310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.977 [2024-12-03 10:45:02.455319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:31.977 [2024-12-03 10:45:02.455327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.456711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.456758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:31.977 [2024-12-03 10:45:02.456769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:17:31.977 [2024-12-03 10:45:02.456777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.456814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.456824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.977 [2024-12-03 10:45:02.456833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:31.977 [2024-12-03 10:45:02.456849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.456887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:31.977 [2024-12-03 10:45:02.456900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.456909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 
00:17:31.977 [2024-12-03 10:45:02.456922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:31.977 [2024-12-03 10:45:02.456929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.483367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.977 [2024-12-03 10:45:02.483423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.977 [2024-12-03 10:45:02.483438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.417 ms 00:17:31.977 [2024-12-03 10:45:02.483446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.977 [2024-12-03 10:45:02.483541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.978 [2024-12-03 10:45:02.483560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.978 [2024-12-03 10:45:02.483571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:31.978 [2024-12-03 10:45:02.483579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.978 [2024-12-03 10:45:02.484955] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 292.458 ms, result 0
00:17:32.920  [2024-12-03T10:45:04.919Z] Copying: 18/1024 [MB] (18 MBps) [...] [2024-12-03T10:46:03.032Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-12-03 10:46:02.945165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:02.945208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:32.419 [2024-12-03 10:46:02.945221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:32.419 [2024-12-03 10:46:02.945230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.419 [2024-12-03 10:46:02.945252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:32.419 [2024-12-03 10:46:02.948034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:02.948073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:32.419 [2024-12-03 10:46:02.948090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:18:32.419 [2024-12-03 10:46:02.948098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.419 [2024-12-03 10:46:02.950808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:02.950842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:32.419 [2024-12-03 10:46:02.950852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:18:32.419 [2024-12-03 10:46:02.950860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.419 [2024-12-03 10:46:02.966969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:02.967008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.419 [2024-12-03 10:46:02.967019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.094 ms 00:18:32.419 [2024-12-03 10:46:02.967032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
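Each management step above is logged by trace_step() as an Action / name / duration / status quadruple. For log triage it can help to fold these records into per-step duration totals; the sketch below is a hypothetical post-processing script (not part of SPDK or the test scripts) and assumes the records arrive one per line, as the application emitted them before console wrapping:

import collections
import re
import sys

# Pair each "name: <step>" record with the "duration: <ms>" record
# that follows it, and accumulate totals per step name.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

totals = collections.defaultdict(float)
pending = None
for line in sys.stdin:
    m = NAME_RE.search(line)
    if m:
        pending = m.group(1).strip()
        continue
    m = DUR_RE.search(line)
    if m and pending is not None:
        totals[pending] += float(m.group(1))
        pending = None

# Print the slowest steps first, e.g. the ~25 ms persist phases nearby.
for name, ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ms:10.3f} ms  {name}")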
00:18:32.419 [2024-12-03 10:46:02.973172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:02.973205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:32.419 [2024-12-03 10:46:02.973215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms 00:18:32.419 [2024-12-03 10:46:02.973222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.419 [2024-12-03 10:46:02.999297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:02.999342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.419 [2024-12-03 10:46:02.999354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.016 ms 00:18:32.419 [2024-12-03 10:46:02.999361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.419 [2024-12-03 10:46:03.015760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:03.015806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.419 [2024-12-03 10:46:03.015818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.357 ms 00:18:32.419 [2024-12-03 10:46:03.015826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.419 [2024-12-03 10:46:03.015986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.419 [2024-12-03 10:46:03.015999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.419 [2024-12-03 10:46:03.016008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:18:32.419 [2024-12-03 10:46:03.016016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.687 [2024-12-03 10:46:03.042140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.687 [2024-12-03 10:46:03.042184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:32.687 [2024-12-03 10:46:03.042196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.110 ms 00:18:32.687 [2024-12-03 10:46:03.042203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.687 [2024-12-03 10:46:03.067540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.687 [2024-12-03 10:46:03.067584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:32.687 [2024-12-03 10:46:03.067595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.295 ms 00:18:32.687 [2024-12-03 10:46:03.067613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.687 [2024-12-03 10:46:03.092558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.687 [2024-12-03 10:46:03.092603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:32.687 [2024-12-03 10:46:03.092614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.904 ms 00:18:32.687 [2024-12-03 10:46:03.092620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.687 [2024-12-03 10:46:03.117601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.687 [2024-12-03 10:46:03.117643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:32.687 [2024-12-03 10:46:03.117654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.900 ms 00:18:32.687 [2024-12-03 10:46:03.117661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
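After the clean state is set, the shutdown dumps per-band validity below, one 'Band N: valid / total wr_cnt: W state: S' record per band; in this run all bands report the same '0 / 261120 wr_cnt: 0 state: free'. A tiny hypothetical parser (record shape taken from the per-band lines, not an SPDK tool) collapses such a dump into a state histogram plus a valid-block total:

import collections
import re
import sys

# Matches records like: "Band 1: 0 / 261120 wr_cnt: 0 state: free"
BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

states = collections.Counter()
valid_blocks = 0
for m in BAND_RE.finditer(sys.stdin.read()):
    states[m.group(5)] += 1
    valid_blocks += int(m.group(2))

print(dict(states))                   # e.g. {'free': 100} for the dump below
print("valid blocks:", valid_blocks)  # 0 for this dump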
00:18:32.687 [2024-12-03 10:46:03.117701] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:32.687 [2024-12-03 10:46:03.117717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-99: 0 / 261120 wr_cnt: 0 state: free (identical for all 99 bands) 00:18:32.688 [2024-12-03
10:46:03.118492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:32.688 [2024-12-03 10:46:03.118508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:32.688 [2024-12-03 10:46:03.118516] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9da631bc-1ae5-44d2-a921-b83fcffa847e 00:18:32.688 [2024-12-03 10:46:03.118524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:32.688 [2024-12-03 10:46:03.118531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:32.688 [2024-12-03 10:46:03.118538] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:32.688 [2024-12-03 10:46:03.118547] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:32.688 [2024-12-03 10:46:03.118554] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:32.688 [2024-12-03 10:46:03.118564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:32.688 [2024-12-03 10:46:03.118571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:32.688 [2024-12-03 10:46:03.118578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:32.688 [2024-12-03 10:46:03.118591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:32.688 [2024-12-03 10:46:03.118598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.688 [2024-12-03 10:46:03.118606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:32.688 [2024-12-03 10:46:03.118615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:18:32.688 [2024-12-03 10:46:03.118624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.132166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.688 [2024-12-03 10:46:03.132209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:32.688 [2024-12-03 10:46:03.132220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.508 ms 00:18:32.688 [2024-12-03 10:46:03.132228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.132452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.688 [2024-12-03 10:46:03.132468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:32.688 [2024-12-03 10:46:03.132482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:18:32.688 [2024-12-03 10:46:03.132491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.171202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.171247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.688 [2024-12-03 10:46:03.171258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.171287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.171353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.171363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.688 [2024-12-03 10:46:03.171377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 
[2024-12-03 10:46:03.171386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.171459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.171469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.688 [2024-12-03 10:46:03.171477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.171484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.171499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.171508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.688 [2024-12-03 10:46:03.171516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.171526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.252883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.252933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.688 [2024-12-03 10:46:03.252946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.252954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.285347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.688 [2024-12-03 10:46:03.285360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.285374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.285449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.688 [2024-12-03 10:46:03.285458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.285466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.285518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.688 [2024-12-03 10:46:03.285526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.285534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.285644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.688 [2024-12-03 10:46:03.285652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.285659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.285700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:32.688 [2024-12-03 10:46:03.285708] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.285716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.285770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:32.688 [2024-12-03 10:46:03.285778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.285786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.688 [2024-12-03 10:46:03.285844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.688 [2024-12-03 10:46:03.285852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.688 [2024-12-03 10:46:03.285860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.688 [2024-12-03 10:46:03.285996] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.791 ms, result 0 00:18:33.629 00:18:33.629 00:18:33.629 10:46:04 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:33.889 [2024-12-03 10:46:04.262300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:33.889 [2024-12-03 10:46:04.262440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73929 ] 00:18:33.889 [2024-12-03 10:46:04.414200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.148 [2024-12-03 10:46:04.625871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.406 [2024-12-03 10:46:04.911156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:34.406 [2024-12-03 10:46:04.911239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:34.672 [2024-12-03 10:46:05.066372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.066430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:34.672 [2024-12-03 10:46:05.066445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:34.672 [2024-12-03 10:46:05.066458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.066511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.066522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.672 [2024-12-03 10:46:05.066532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:34.672 [2024-12-03 10:46:05.066540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.066560] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:34.672 [2024-12-03 10:46:05.067347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:18:34.672 [2024-12-03 10:46:05.067375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.067384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.672 [2024-12-03 10:46:05.067393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:18:34.672 [2024-12-03 10:46:05.067401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.069041] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:34.672 [2024-12-03 10:46:05.083155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.083202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:34.672 [2024-12-03 10:46:05.083217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.116 ms 00:18:34.672 [2024-12-03 10:46:05.083226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.083308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.083318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:34.672 [2024-12-03 10:46:05.083327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:34.672 [2024-12-03 10:46:05.083335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.091539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.091581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.672 [2024-12-03 10:46:05.091592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.125 ms 00:18:34.672 [2024-12-03 10:46:05.091600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.091692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.091702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.672 [2024-12-03 10:46:05.091710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:34.672 [2024-12-03 10:46:05.091718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.091763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.091773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:34.672 [2024-12-03 10:46:05.091782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:34.672 [2024-12-03 10:46:05.091790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.091822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:34.672 [2024-12-03 10:46:05.095971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.096009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.672 [2024-12-03 10:46:05.096020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.163 ms 00:18:34.672 [2024-12-03 10:46:05.096028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.096078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.096087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:34.672 [2024-12-03 10:46:05.096097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:34.672 [2024-12-03 10:46:05.096107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.096156] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:34.672 [2024-12-03 10:46:05.096179] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:34.672 [2024-12-03 10:46:05.096215] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:34.672 [2024-12-03 10:46:05.096232] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:34.672 [2024-12-03 10:46:05.096308] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:34.672 [2024-12-03 10:46:05.096326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:34.672 [2024-12-03 10:46:05.096340] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:34.672 [2024-12-03 10:46:05.096350] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:34.672 [2024-12-03 10:46:05.096358] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:34.672 [2024-12-03 10:46:05.096367] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:34.672 [2024-12-03 10:46:05.096374] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:34.672 [2024-12-03 10:46:05.096382] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:34.672 [2024-12-03 10:46:05.096389] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:34.672 [2024-12-03 10:46:05.096397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.096405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:34.672 [2024-12-03 10:46:05.096413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:18:34.672 [2024-12-03 10:46:05.096421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.096487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.672 [2024-12-03 10:46:05.096495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:34.672 [2024-12-03 10:46:05.096503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:34.672 [2024-12-03 10:46:05.096511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.672 [2024-12-03 10:46:05.096580] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:34.672 [2024-12-03 10:46:05.096590] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:34.672 [2024-12-03 10:46:05.096598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.672 [2024-12-03 10:46:05.096606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.672 [2024-12-03 
10:46:05.096613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:34.672 [2024-12-03 10:46:05.096620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:34.672 [2024-12-03 10:46:05.096627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:34.672 [2024-12-03 10:46:05.096633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:34.672 [2024-12-03 10:46:05.096640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:34.672 [2024-12-03 10:46:05.096647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.672 [2024-12-03 10:46:05.096653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:34.673 [2024-12-03 10:46:05.096661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:34.673 [2024-12-03 10:46:05.096668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.673 [2024-12-03 10:46:05.096675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:34.673 [2024-12-03 10:46:05.096682] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:34.673 [2024-12-03 10:46:05.096689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.673 [2024-12-03 10:46:05.096703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:34.673 [2024-12-03 10:46:05.096710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:34.673 [2024-12-03 10:46:05.096717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.673 [2024-12-03 10:46:05.096723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:34.673 [2024-12-03 10:46:05.096730] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:34.673 [2024-12-03 10:46:05.096738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:34.673 [2024-12-03 10:46:05.096745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:34.673 [2024-12-03 10:46:05.096751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:34.673 [2024-12-03 10:46:05.096758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:34.673 [2024-12-03 10:46:05.096765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:34.673 [2024-12-03 10:46:05.096771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:34.673 [2024-12-03 10:46:05.096778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:34.673 [2024-12-03 10:46:05.096785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:34.673 [2024-12-03 10:46:05.096791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:34.673 [2024-12-03 10:46:05.096798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:34.673 [2024-12-03 10:46:05.096804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:34.673 [2024-12-03 10:46:05.096810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:34.673 [2024-12-03 10:46:05.096817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:34.673 [2024-12-03 10:46:05.096824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:34.673 [2024-12-03 10:46:05.096830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 
00:18:34.673 [2024-12-03 10:46:05.096836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.673 [2024-12-03 10:46:05.096843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:34.673 [2024-12-03 10:46:05.096849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:34.673 [2024-12-03 10:46:05.096855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.673 [2024-12-03 10:46:05.096862] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:34.673 [2024-12-03 10:46:05.096873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:34.673 [2024-12-03 10:46:05.096881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.673 [2024-12-03 10:46:05.096890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.673 [2024-12-03 10:46:05.096898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:34.673 [2024-12-03 10:46:05.096905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:34.673 [2024-12-03 10:46:05.096911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:34.673 [2024-12-03 10:46:05.096918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:34.673 [2024-12-03 10:46:05.096924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:34.673 [2024-12-03 10:46:05.096931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:34.673 [2024-12-03 10:46:05.096939] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:34.673 [2024-12-03 10:46:05.096949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.673 [2024-12-03 10:46:05.096957] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:34.673 [2024-12-03 10:46:05.096965] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:34.673 [2024-12-03 10:46:05.096972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:34.673 [2024-12-03 10:46:05.096979] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:34.673 [2024-12-03 10:46:05.096987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:34.673 [2024-12-03 10:46:05.096995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:34.673 [2024-12-03 10:46:05.097002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:34.673 [2024-12-03 10:46:05.097009] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:34.673 [2024-12-03 10:46:05.097016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:34.673 [2024-12-03 10:46:05.097023] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:34.673 [2024-12-03 10:46:05.097031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:34.673 [2024-12-03 10:46:05.097038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:34.673 [2024-12-03 10:46:05.097045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:34.673 [2024-12-03 10:46:05.097066] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:34.673 [2024-12-03 10:46:05.097074] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.673 [2024-12-03 10:46:05.097083] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:34.673 [2024-12-03 10:46:05.097090] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:34.673 [2024-12-03 10:46:05.097097] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:34.673 [2024-12-03 10:46:05.097103] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:34.673 [2024-12-03 10:46:05.097111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.097119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:34.673 [2024-12-03 10:46:05.097127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:18:34.673 [2024-12-03 10:46:05.097135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.115258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.115329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.673 [2024-12-03 10:46:05.115341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.083 ms 00:18:34.673 [2024-12-03 10:46:05.115356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.115448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.115458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:34.673 [2024-12-03 10:46:05.115468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:34.673 [2024-12-03 10:46:05.115478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.161461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.161637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.673 [2024-12-03 10:46:05.161658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.931 ms 00:18:34.673 [2024-12-03 10:46:05.161668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.161719] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.161730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.673 [2024-12-03 10:46:05.161739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:34.673 [2024-12-03 10:46:05.161747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.162319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.162363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.673 [2024-12-03 10:46:05.162374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:18:34.673 [2024-12-03 10:46:05.162389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.162517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.162529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.673 [2024-12-03 10:46:05.162537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:34.673 [2024-12-03 10:46:05.162545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.178908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.178953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.673 [2024-12-03 10:46:05.178964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.338 ms 00:18:34.673 [2024-12-03 10:46:05.178972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.193265] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:34.673 [2024-12-03 10:46:05.193314] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:34.673 [2024-12-03 10:46:05.193327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.193336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:34.673 [2024-12-03 10:46:05.193346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.219 ms 00:18:34.673 [2024-12-03 10:46:05.193354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.219472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.219522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:34.673 [2024-12-03 10:46:05.219534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.066 ms 00:18:34.673 [2024-12-03 10:46:05.219543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.232531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.232575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:34.673 [2024-12-03 10:46:05.232587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.938 ms 00:18:34.673 [2024-12-03 10:46:05.232596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.245159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 
10:46:05.245226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:34.673 [2024-12-03 10:46:05.245238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.520 ms 00:18:34.673 [2024-12-03 10:46:05.245246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.673 [2024-12-03 10:46:05.245633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.673 [2024-12-03 10:46:05.245658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:34.673 [2024-12-03 10:46:05.245668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:18:34.673 [2024-12-03 10:46:05.245678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.311579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.311637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:34.933 [2024-12-03 10:46:05.311652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.879 ms 00:18:34.933 [2024-12-03 10:46:05.311661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.322756] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:34.933 [2024-12-03 10:46:05.325783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.325825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:34.933 [2024-12-03 10:46:05.325836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.066 ms 00:18:34.933 [2024-12-03 10:46:05.325850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.325918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.325929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:34.933 [2024-12-03 10:46:05.325938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:34.933 [2024-12-03 10:46:05.325947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.326014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.326026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:34.933 [2024-12-03 10:46:05.326035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:34.933 [2024-12-03 10:46:05.326043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.327422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.327463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:34.933 [2024-12-03 10:46:05.327473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:18:34.933 [2024-12-03 10:46:05.327482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.327515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.327524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:34.933 [2024-12-03 10:46:05.327538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:34.933 [2024-12-03 
10:46:05.327546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.327583] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:34.933 [2024-12-03 10:46:05.327593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.327605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:34.933 [2024-12-03 10:46:05.327613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:34.933 [2024-12-03 10:46:05.327623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.353278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.353328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:34.933 [2024-12-03 10:46:05.353341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.636 ms 00:18:34.933 [2024-12-03 10:46:05.353350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.353434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.933 [2024-12-03 10:46:05.353445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:34.933 [2024-12-03 10:46:05.353453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:34.933 [2024-12-03 10:46:05.353462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.933 [2024-12-03 10:46:05.354657] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.808 ms, result 0 00:18:36.314  [2024-12-03T10:47:07.530Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-03 10:47:07.455126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.917 [2024-12-03 10:47:07.455585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.917 [2024-12-03 10:47:07.455742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:36.918
[2024-12-03 10:47:07.455798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.918 [2024-12-03 10:47:07.455885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.918 [2024-12-03 10:47:07.463927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.918 [2024-12-03 10:47:07.464112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.918 [2024-12-03 10:47:07.464337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.695 ms 00:19:36.918 [2024-12-03 10:47:07.464385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.918 [2024-12-03 10:47:07.464690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.918 [2024-12-03 10:47:07.464731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.918 [2024-12-03 10:47:07.464753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:36.918 [2024-12-03 10:47:07.464827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.918 [2024-12-03 10:47:07.468329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.918 [2024-12-03
10:47:07.468444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.918 [2024-12-03 10:47:07.468516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:19:36.918 [2024-12-03 10:47:07.468539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.918 [2024-12-03 10:47:07.474750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.918 [2024-12-03 10:47:07.474911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:36.918 [2024-12-03 10:47:07.474981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:19:36.918 [2024-12-03 10:47:07.474995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.918 [2024-12-03 10:47:07.502975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.918 [2024-12-03 10:47:07.503196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.918 [2024-12-03 10:47:07.503427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.858 ms 00:19:36.918 [2024-12-03 10:47:07.503456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.918 [2024-12-03 10:47:07.520539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.918 [2024-12-03 10:47:07.520717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.918 [2024-12-03 10:47:07.520782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.031 ms 00:19:36.918 [2024-12-03 10:47:07.520816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.918 [2024-12-03 10:47:07.521468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.918 [2024-12-03 10:47:07.521649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:36.918 [2024-12-03 10:47:07.521718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:36.918 [2024-12-03 10:47:07.521744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.180 [2024-12-03 10:47:07.548779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.180 [2024-12-03 10:47:07.548952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:37.180 [2024-12-03 10:47:07.549015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.001 ms 00:19:37.180 [2024-12-03 10:47:07.549038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.180 [2024-12-03 10:47:07.574826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.180 [2024-12-03 10:47:07.574995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:37.180 [2024-12-03 10:47:07.575097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.656 ms 00:19:37.180 [2024-12-03 10:47:07.575122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.180 [2024-12-03 10:47:07.600399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.180 [2024-12-03 10:47:07.600561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.180 [2024-12-03 10:47:07.600622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.227 ms 00:19:37.180 [2024-12-03 10:47:07.600645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.180 [2024-12-03 10:47:07.625829] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.180 [2024-12-03 10:47:07.625998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.180 [2024-12-03 10:47:07.626072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.909 ms 00:19:37.180 [2024-12-03 10:47:07.626096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.180 [2024-12-03 10:47:07.626168] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.180 [2024-12-03 10:47:07.626208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.626241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.626329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.626363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.626393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.626421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627705] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.627948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.628011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:37.180 [2024-12-03 10:47:07.628041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 
[2024-12-03 10:47:07.628395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:19:37.181 [2024-12-03 10:47:07.628600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.181 [2024-12-03 10:47:07.628839] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.181 [2024-12-03 10:47:07.628848] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9da631bc-1ae5-44d2-a921-b83fcffa847e 00:19:37.181 [2024-12-03 10:47:07.628857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.181 [2024-12-03 10:47:07.628864] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:37.181 [2024-12-03 10:47:07.628872] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.181 [2024-12-03 10:47:07.628882] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.181 [2024-12-03 10:47:07.628890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.181 [2024-12-03 10:47:07.628898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.181 [2024-12-03 10:47:07.628906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.181 [2024-12-03 10:47:07.628923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.181 [2024-12-03 10:47:07.628930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.181 [2024-12-03 10:47:07.628940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.181 [2024-12-03 10:47:07.628951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.181 [2024-12-03 10:47:07.628965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:19:37.181 [2024-12-03 10:47:07.628973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.181 [2024-12-03 10:47:07.642664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.181 [2024-12-03 10:47:07.642820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.181 [2024-12-03 10:47:07.642873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.635 ms 00:19:37.181 [2024-12-03 10:47:07.642896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.181 [2024-12-03 10:47:07.643162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.181 [2024-12-03 10:47:07.643233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.181 [2024-12-03 10:47:07.643738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:19:37.181 [2024-12-03 10:47:07.643768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.181 [2024-12-03 10:47:07.682966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.181 [2024-12-03 10:47:07.683020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.182 [2024-12-03 10:47:07.683032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:37.182 [2024-12-03 10:47:07.683040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.182 [2024-12-03 10:47:07.683120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.182 [2024-12-03 10:47:07.683136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.182 [2024-12-03 10:47:07.683146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.182 [2024-12-03 10:47:07.683153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.182 [2024-12-03 10:47:07.683234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.182 [2024-12-03 10:47:07.683248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.182 [2024-12-03 10:47:07.683272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.182 [2024-12-03 10:47:07.683280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.182 [2024-12-03 10:47:07.683298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.182 [2024-12-03 10:47:07.683306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.182 [2024-12-03 10:47:07.683319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.182 [2024-12-03 10:47:07.683327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.182 [2024-12-03 10:47:07.764811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.182 [2024-12-03 10:47:07.764869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.182 [2024-12-03 10:47:07.764881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.182 [2024-12-03 10:47:07.764890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.797358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.443 [2024-12-03 10:47:07.797407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.443 [2024-12-03 10:47:07.797425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.443 [2024-12-03 10:47:07.797433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.797500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.443 [2024-12-03 10:47:07.797510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.443 [2024-12-03 10:47:07.797519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.443 [2024-12-03 10:47:07.797527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.797570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.443 [2024-12-03 10:47:07.797581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.443 [2024-12-03 10:47:07.797590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.443 [2024-12-03 10:47:07.797603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.797701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.443 [2024-12-03 10:47:07.797713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.443 
[2024-12-03 10:47:07.797721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.443 [2024-12-03 10:47:07.797729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.797765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.443 [2024-12-03 10:47:07.797777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:37.443 [2024-12-03 10:47:07.797785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.443 [2024-12-03 10:47:07.797793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.797839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.443 [2024-12-03 10:47:07.797848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.443 [2024-12-03 10:47:07.797856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.443 [2024-12-03 10:47:07.797864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.797914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.443 [2024-12-03 10:47:07.797926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.443 [2024-12-03 10:47:07.797936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.443 [2024-12-03 10:47:07.797949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.443 [2024-12-03 10:47:07.798112] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.990 ms, result 0 00:19:38.386 00:19:38.386 00:19:38.386 10:47:08 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:40.933 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:40.933 10:47:10 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:40.933 [2024-12-03 10:47:11.006786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:40.933 [2024-12-03 10:47:11.006929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74618 ] 00:19:40.933 [2024-12-03 10:47:11.155384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.933 [2024-12-03 10:47:11.372317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.194 [2024-12-03 10:47:11.659685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.194 [2024-12-03 10:47:11.659765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.456 [2024-12-03 10:47:11.814221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.814281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:41.456 [2024-12-03 10:47:11.814296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:41.456 [2024-12-03 10:47:11.814307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.814362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.814373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.456 [2024-12-03 10:47:11.814382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:41.456 [2024-12-03 10:47:11.814391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.814411] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:41.456 [2024-12-03 10:47:11.815181] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:41.456 [2024-12-03 10:47:11.815209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.815218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.456 [2024-12-03 10:47:11.815228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:19:41.456 [2024-12-03 10:47:11.815236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.816889] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:41.456 [2024-12-03 10:47:11.831363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.831413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:41.456 [2024-12-03 10:47:11.831428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.476 ms 00:19:41.456 [2024-12-03 10:47:11.831436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.831510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.831520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:41.456 [2024-12-03 10:47:11.831529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:41.456 [2024-12-03 10:47:11.831537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.839911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 
10:47:11.839956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.456 [2024-12-03 10:47:11.839967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.295 ms 00:19:41.456 [2024-12-03 10:47:11.839976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.840092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.840103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.456 [2024-12-03 10:47:11.840112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:41.456 [2024-12-03 10:47:11.840120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.840166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.840179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:41.456 [2024-12-03 10:47:11.840188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:41.456 [2024-12-03 10:47:11.840196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.840228] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.456 [2024-12-03 10:47:11.844403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.844448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.456 [2024-12-03 10:47:11.844459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:19:41.456 [2024-12-03 10:47:11.844467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.844505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.456 [2024-12-03 10:47:11.844513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:41.456 [2024-12-03 10:47:11.844522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:41.456 [2024-12-03 10:47:11.844533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.456 [2024-12-03 10:47:11.844584] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:41.456 [2024-12-03 10:47:11.844608] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:41.456 [2024-12-03 10:47:11.844644] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:41.456 [2024-12-03 10:47:11.844661] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:41.456 [2024-12-03 10:47:11.844737] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:41.456 [2024-12-03 10:47:11.844749] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:41.456 [2024-12-03 10:47:11.844763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:41.456 [2024-12-03 10:47:11.844773] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:41.456 [2024-12-03 10:47:11.844782] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:41.456 [2024-12-03 10:47:11.844791] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:41.456 [2024-12-03 10:47:11.844799] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:41.456 [2024-12-03 10:47:11.844807] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:41.456 [2024-12-03 10:47:11.844815] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:41.457 [2024-12-03 10:47:11.844826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.457 [2024-12-03 10:47:11.844835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:41.457 [2024-12-03 10:47:11.844843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:41.457 [2024-12-03 10:47:11.844850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.457 [2024-12-03 10:47:11.844915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.457 [2024-12-03 10:47:11.844927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:41.457 [2024-12-03 10:47:11.844935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:41.457 [2024-12-03 10:47:11.844942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.457 [2024-12-03 10:47:11.845014] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:41.457 [2024-12-03 10:47:11.845036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:41.457 [2024-12-03 10:47:11.845045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:41.457 [2024-12-03 10:47:11.845105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:41.457 [2024-12-03 10:47:11.845130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.457 [2024-12-03 10:47:11.845144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:41.457 [2024-12-03 10:47:11.845154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:41.457 [2024-12-03 10:47:11.845161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.457 [2024-12-03 10:47:11.845169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:41.457 [2024-12-03 10:47:11.845176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:41.457 [2024-12-03 10:47:11.845184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:41.457 [2024-12-03 10:47:11.845204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:41.457 [2024-12-03 10:47:11.845211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:41.457 [2024-12-03 10:47:11.845226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:41.457 [2024-12-03 10:47:11.845232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845239] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:41.457 [2024-12-03 10:47:11.845246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845258] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:41.457 [2024-12-03 10:47:11.845265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:41.457 [2024-12-03 10:47:11.845286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845300] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:41.457 [2024-12-03 10:47:11.845306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845322] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:41.457 [2024-12-03 10:47:11.845328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.457 [2024-12-03 10:47:11.845341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:41.457 [2024-12-03 10:47:11.845348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:41.457 [2024-12-03 10:47:11.845354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.457 [2024-12-03 10:47:11.845360] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:41.457 [2024-12-03 10:47:11.845372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:41.457 [2024-12-03 10:47:11.845380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.457 [2024-12-03 10:47:11.845396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:41.457 [2024-12-03 10:47:11.845403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:41.457 [2024-12-03 10:47:11.845410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:41.457 [2024-12-03 10:47:11.845417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:41.457 [2024-12-03 10:47:11.845423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:41.457 [2024-12-03 10:47:11.845431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:41.457 [2024-12-03 10:47:11.845439] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:41.457 [2024-12-03 10:47:11.845448] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.457 [2024-12-03 10:47:11.845457] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:41.457 [2024-12-03 10:47:11.845464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:41.457 [2024-12-03 10:47:11.845471] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:41.457 [2024-12-03 10:47:11.845478] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:41.457 [2024-12-03 10:47:11.845485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:41.457 [2024-12-03 10:47:11.845492] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:41.457 [2024-12-03 10:47:11.845503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:41.457 [2024-12-03 10:47:11.845510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:41.457 [2024-12-03 10:47:11.845517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:41.457 [2024-12-03 10:47:11.845524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:41.457 [2024-12-03 10:47:11.845532] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:41.457 [2024-12-03 10:47:11.845539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:41.457 [2024-12-03 10:47:11.845548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:41.457 [2024-12-03 10:47:11.845556] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:41.457 [2024-12-03 10:47:11.845565] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.457 [2024-12-03 10:47:11.845572] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:41.457 [2024-12-03 10:47:11.845579] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:41.457 [2024-12-03 10:47:11.845586] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:41.457 [2024-12-03 10:47:11.845594] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:19:41.457 [2024-12-03 10:47:11.845602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.457 [2024-12-03 10:47:11.845611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:41.457 [2024-12-03 10:47:11.845619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:19:41.457 [2024-12-03 10:47:11.845626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.457 [2024-12-03 10:47:11.864002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.457 [2024-12-03 10:47:11.864051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.457 [2024-12-03 10:47:11.864082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.332 ms 00:19:41.457 [2024-12-03 10:47:11.864097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.457 [2024-12-03 10:47:11.864188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.457 [2024-12-03 10:47:11.864199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:41.457 [2024-12-03 10:47:11.864208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:41.457 [2024-12-03 10:47:11.864216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.457 [2024-12-03 10:47:11.910471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.457 [2024-12-03 10:47:11.910527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.457 [2024-12-03 10:47:11.910540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.203 ms 00:19:41.457 [2024-12-03 10:47:11.910548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.457 [2024-12-03 10:47:11.910599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.457 [2024-12-03 10:47:11.910609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.458 [2024-12-03 10:47:11.910618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.458 [2024-12-03 10:47:11.910627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:11.911251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:11.911298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.458 [2024-12-03 10:47:11.911309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:19:41.458 [2024-12-03 10:47:11.911324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:11.911454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:11.911466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.458 [2024-12-03 10:47:11.911475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:41.458 [2024-12-03 10:47:11.911482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:11.933622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:11.933691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.458 [2024-12-03 10:47:11.933709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.109 ms 00:19:41.458 [2024-12-03 
10:47:11.933720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:11.949459] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:41.458 [2024-12-03 10:47:11.949515] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:41.458 [2024-12-03 10:47:11.949531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:11.949541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:41.458 [2024-12-03 10:47:11.949552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.602 ms 00:19:41.458 [2024-12-03 10:47:11.949560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:11.975934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:11.975987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:41.458 [2024-12-03 10:47:11.976001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.306 ms 00:19:41.458 [2024-12-03 10:47:11.976011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:11.989084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:11.989133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:41.458 [2024-12-03 10:47:11.989146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.006 ms 00:19:41.458 [2024-12-03 10:47:11.989154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:12.002210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:12.002269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:41.458 [2024-12-03 10:47:12.002281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.007 ms 00:19:41.458 [2024-12-03 10:47:12.002289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.458 [2024-12-03 10:47:12.002701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.458 [2024-12-03 10:47:12.002787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:41.458 [2024-12-03 10:47:12.002799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:19:41.458 [2024-12-03 10:47:12.002809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.718 [2024-12-03 10:47:12.075542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.718 [2024-12-03 10:47:12.075608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:41.718 [2024-12-03 10:47:12.075624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.711 ms 00:19:41.718 [2024-12-03 10:47:12.075633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.718 [2024-12-03 10:47:12.087320] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:41.718 [2024-12-03 10:47:12.090926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.718 [2024-12-03 10:47:12.090970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:41.718 [2024-12-03 10:47:12.090982] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.229 ms 00:19:41.718 [2024-12-03 10:47:12.090997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.718 [2024-12-03 10:47:12.091097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.718 [2024-12-03 10:47:12.091111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:41.718 [2024-12-03 10:47:12.091121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:41.718 [2024-12-03 10:47:12.091130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.718 [2024-12-03 10:47:12.091213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.718 [2024-12-03 10:47:12.091225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:41.718 [2024-12-03 10:47:12.091235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:41.718 [2024-12-03 10:47:12.091245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.718 [2024-12-03 10:47:12.092760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.718 [2024-12-03 10:47:12.092806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:41.718 [2024-12-03 10:47:12.092818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:19:41.718 [2024-12-03 10:47:12.092826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.718 [2024-12-03 10:47:12.092865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.718 [2024-12-03 10:47:12.092875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:41.719 [2024-12-03 10:47:12.092891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:41.719 [2024-12-03 10:47:12.092899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.719 [2024-12-03 10:47:12.092942] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:41.719 [2024-12-03 10:47:12.092953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.719 [2024-12-03 10:47:12.092965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:41.719 [2024-12-03 10:47:12.092974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:41.719 [2024-12-03 10:47:12.092982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.719 [2024-12-03 10:47:12.120149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.719 [2024-12-03 10:47:12.120202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:41.719 [2024-12-03 10:47:12.120215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.145 ms 00:19:41.719 [2024-12-03 10:47:12.120224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.719 [2024-12-03 10:47:12.120323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.719 [2024-12-03 10:47:12.120334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:41.719 [2024-12-03 10:47:12.120344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:41.719 [2024-12-03 10:47:12.120353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.719 [2024-12-03 10:47:12.121748] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 306.969 ms, result 0 00:19:42.664  [2024-12-03T10:47:14.222Z] Copying: 16/1024 [MB] (16 MBps) [... intermediate spdk_dd Copying progress updates elided ...] [2024-12-03T10:48:22.642Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-03 10:48:22.528189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.029 [2024-12-03 10:48:22.528378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.029 [2024-12-03 10:48:22.528420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:52.029 [2024-12-03 10:48:22.528442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.029 [2024-12-03 10:48:22.530009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.029 [2024-12-03 10:48:22.538822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.029 [2024-12-03 10:48:22.538964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.029 [2024-12-03 10:48:22.539003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.629 ms 00:20:52.029 [2024-12-03 10:48:22.539032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.029 [2024-12-03 10:48:22.552347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.029 [2024-12-03 10:48:22.552401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.029 [2024-12-03 10:48:22.552430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.777 ms 00:20:52.029 [2024-12-03 10:48:22.552440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.029 [2024-12-03 10:48:22.575073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.029 [2024-12-03 10:48:22.575119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.029 [2024-12-03 10:48:22.575132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.595 ms 00:20:52.029 [2024-12-03 10:48:22.575142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.029 [2024-12-03 10:48:22.581273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.029 [2024-12-03 10:48:22.581306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:52.029 [2024-12-03 10:48:22.581318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.097 ms 00:20:52.029 [2024-12-03 10:48:22.581334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.029 [2024-12-03 10:48:22.609143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.029 [2024-12-03 10:48:22.609184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.029 [2024-12-03 10:48:22.609197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration:
27.752 ms 00:20:52.029 [2024-12-03 10:48:22.609206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.029 [2024-12-03 10:48:22.626420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.029 [2024-12-03 10:48:22.626460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.029 [2024-12-03 10:48:22.626473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.168 ms 00:20:52.029 [2024-12-03 10:48:22.626482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.290 [2024-12-03 10:48:22.874113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.290 [2024-12-03 10:48:22.874155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.290 [2024-12-03 10:48:22.874168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 247.582 ms 00:20:52.290 [2024-12-03 10:48:22.874178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.290 [2024-12-03 10:48:22.900617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.290 [2024-12-03 10:48:22.900656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:52.290 [2024-12-03 10:48:22.900668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.417 ms 00:20:52.290 [2024-12-03 10:48:22.900677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.552 [2024-12-03 10:48:22.926324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.552 [2024-12-03 10:48:22.926361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:52.552 [2024-12-03 10:48:22.926387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.604 ms 00:20:52.552 [2024-12-03 10:48:22.926395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.552 [2024-12-03 10:48:22.951111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.552 [2024-12-03 10:48:22.951148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.552 [2024-12-03 10:48:22.951159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.673 ms 00:20:52.552 [2024-12-03 10:48:22.951167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.552 [2024-12-03 10:48:22.975783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.552 [2024-12-03 10:48:22.975821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.552 [2024-12-03 10:48:22.975833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.530 ms 00:20:52.552 [2024-12-03 10:48:22.975842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.552 [2024-12-03 10:48:22.975884] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.552 [2024-12-03 10:48:22.975901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 88320 / 261120 wr_cnt: 1 state: open 00:20:52.552 [2024-12-03 10:48:22.975913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 
10:48:22.975940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.975993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.552 [2024-12-03 10:48:22.976114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:20:52.553 [2024-12-03 10:48:22.976180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.553 [2024-12-03 10:48:22.976770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.553 [2024-12-03 10:48:22.976779] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9da631bc-1ae5-44d2-a921-b83fcffa847e 00:20:52.553 [2024-12-03 10:48:22.976789] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 88320 00:20:52.553 [2024-12-03 10:48:22.976797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 89280 
00:20:52.553 [2024-12-03 10:48:22.976804] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 88320 00:20:52.553 [2024-12-03 10:48:22.976819] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0109 00:20:52.553 [2024-12-03 10:48:22.976827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.553 [2024-12-03 10:48:22.976835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.553 [2024-12-03 10:48:22.976843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.553 [2024-12-03 10:48:22.976857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.553 [2024-12-03 10:48:22.976870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.553 [2024-12-03 10:48:22.976879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.553 [2024-12-03 10:48:22.976888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.553 [2024-12-03 10:48:22.976897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:20:52.553 [2024-12-03 10:48:22.976905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.553 [2024-12-03 10:48:22.991080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.554 [2024-12-03 10:48:22.991121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.554 [2024-12-03 10:48:22.991132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.141 ms 00:20:52.554 [2024-12-03 10:48:22.991140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:22.991404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.554 [2024-12-03 10:48:22.991416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.554 [2024-12-03 10:48:22.991425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:20:52.554 [2024-12-03 10:48:22.991435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.033612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.033652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.554 [2024-12-03 10:48:23.033664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.033673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.033741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.033750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.554 [2024-12-03 10:48:23.033760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.033770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.033852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.033870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.554 [2024-12-03 10:48:23.033880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.033888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.033905] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.033913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.554 [2024-12-03 10:48:23.033922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.033931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.119737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.119784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.554 [2024-12-03 10:48:23.119797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.119806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.154738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.154779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.554 [2024-12-03 10:48:23.154791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.154801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.154866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.154877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.554 [2024-12-03 10:48:23.154894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.154903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.154951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.154963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.554 [2024-12-03 10:48:23.154972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.154981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.155120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.155136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.554 [2024-12-03 10:48:23.155146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.155157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.155191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.155201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.554 [2024-12-03 10:48:23.155210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.155218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.155263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.155290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.554 [2024-12-03 10:48:23.155301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.155312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:52.554 [2024-12-03 10:48:23.155368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.554 [2024-12-03 10:48:23.155381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.554 [2024-12-03 10:48:23.155390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.554 [2024-12-03 10:48:23.155398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.554 [2024-12-03 10:48:23.155549] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 627.316 ms, result 0 00:20:54.470 00:20:54.470 00:20:54.470 10:48:24 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:54.470 [2024-12-03 10:48:24.825102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:54.470 [2024-12-03 10:48:24.825237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75379 ] 00:20:54.470 [2024-12-03 10:48:24.979114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.731 [2024-12-03 10:48:25.202932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.991 [2024-12-03 10:48:25.491257] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.991 [2024-12-03 10:48:25.491360] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.252 [2024-12-03 10:48:25.647632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.252 [2024-12-03 10:48:25.647696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.252 [2024-12-03 10:48:25.647718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:55.252 [2024-12-03 10:48:25.647733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.252 [2024-12-03 10:48:25.647810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.252 [2024-12-03 10:48:25.647826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.252 [2024-12-03 10:48:25.647840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:55.252 [2024-12-03 10:48:25.647853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.252 [2024-12-03 10:48:25.647885] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:55.252 [2024-12-03 10:48:25.648898] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.252 [2024-12-03 10:48:25.648946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.648960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.253 [2024-12-03 10:48:25.648974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:20:55.253 [2024-12-03 10:48:25.648987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.650900] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 
00:20:55.253 [2024-12-03 10:48:25.665494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.665549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:55.253 [2024-12-03 10:48:25.665570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.597 ms 00:20:55.253 [2024-12-03 10:48:25.665582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.665688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.665705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:55.253 [2024-12-03 10:48:25.665719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:55.253 [2024-12-03 10:48:25.665731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.674340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.674385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.253 [2024-12-03 10:48:25.674400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.500 ms 00:20:55.253 [2024-12-03 10:48:25.674411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.674551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.674567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.253 [2024-12-03 10:48:25.674583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:55.253 [2024-12-03 10:48:25.674598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.674664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.674679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.253 [2024-12-03 10:48:25.674692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:55.253 [2024-12-03 10:48:25.674705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.674750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.253 [2024-12-03 10:48:25.679270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.679345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.253 [2024-12-03 10:48:25.679361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.538 ms 00:20:55.253 [2024-12-03 10:48:25.679373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.679432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.679446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.253 [2024-12-03 10:48:25.679465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:55.253 [2024-12-03 10:48:25.679476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.679552] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:55.253 [2024-12-03 10:48:25.679586] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x138 bytes 00:20:55.253 [2024-12-03 10:48:25.679643] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:55.253 [2024-12-03 10:48:25.679667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:55.253 [2024-12-03 10:48:25.679772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:55.253 [2024-12-03 10:48:25.679795] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.253 [2024-12-03 10:48:25.679811] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:55.253 [2024-12-03 10:48:25.679830] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.253 [2024-12-03 10:48:25.679845] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.253 [2024-12-03 10:48:25.679859] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:55.253 [2024-12-03 10:48:25.679872] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:55.253 [2024-12-03 10:48:25.679885] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:55.253 [2024-12-03 10:48:25.679896] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:55.253 [2024-12-03 10:48:25.679911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.679923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.253 [2024-12-03 10:48:25.679936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:20:55.253 [2024-12-03 10:48:25.679952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.680075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.253 [2024-12-03 10:48:25.680092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.253 [2024-12-03 10:48:25.680105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:55.253 [2024-12-03 10:48:25.680119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.253 [2024-12-03 10:48:25.680224] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.253 [2024-12-03 10:48:25.680241] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.253 [2024-12-03 10:48:25.680254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.253 [2024-12-03 10:48:25.680296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680319] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.253 [2024-12-03 10:48:25.680331] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680343] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:20:55.253 [2024-12-03 10:48:25.680354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.253 [2024-12-03 10:48:25.680366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:55.253 [2024-12-03 10:48:25.680377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.253 [2024-12-03 10:48:25.680388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.253 [2024-12-03 10:48:25.680399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:55.253 [2024-12-03 10:48:25.680412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:55.253 [2024-12-03 10:48:25.680444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:55.253 [2024-12-03 10:48:25.680455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:55.253 [2024-12-03 10:48:25.680478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:55.253 [2024-12-03 10:48:25.680490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.253 [2024-12-03 10:48:25.680513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.253 [2024-12-03 10:48:25.680551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.253 [2024-12-03 10:48:25.680587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680611] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.253 [2024-12-03 10:48:25.680622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680646] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.253 [2024-12-03 10:48:25.680657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.253 [2024-12-03 10:48:25.680682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:55.253 [2024-12-03 10:48:25.680694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:55.253 [2024-12-03 10:48:25.680705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.253 [2024-12-03 10:48:25.680717] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.253 [2024-12-03 10:48:25.680730] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:55.253 [2024-12-03 10:48:25.680742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.253 [2024-12-03 10:48:25.680756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.253 [2024-12-03 10:48:25.680769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:55.253 [2024-12-03 10:48:25.680781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.253 [2024-12-03 10:48:25.680793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.253 [2024-12-03 10:48:25.680804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.253 [2024-12-03 10:48:25.680816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.254 [2024-12-03 10:48:25.680828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:55.254 [2024-12-03 10:48:25.680843] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.254 [2024-12-03 10:48:25.680858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.254 [2024-12-03 10:48:25.680873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:55.254 [2024-12-03 10:48:25.680886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:55.254 [2024-12-03 10:48:25.680899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:55.254 [2024-12-03 10:48:25.680912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:55.254 [2024-12-03 10:48:25.680927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:55.254 [2024-12-03 10:48:25.680940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:55.254 [2024-12-03 10:48:25.680953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:55.254 [2024-12-03 10:48:25.680967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:55.254 [2024-12-03 10:48:25.680980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:55.254 [2024-12-03 10:48:25.680992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:55.254 [2024-12-03 10:48:25.681005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:55.254 [2024-12-03 10:48:25.681018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:55.254 [2024-12-03 10:48:25.681032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 
blk_sz:0x3d120 00:20:55.254 [2024-12-03 10:48:25.681044] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.254 [2024-12-03 10:48:25.681071] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.254 [2024-12-03 10:48:25.681086] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.254 [2024-12-03 10:48:25.681100] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.254 [2024-12-03 10:48:25.681112] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.254 [2024-12-03 10:48:25.681126] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.254 [2024-12-03 10:48:25.681140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.681153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.254 [2024-12-03 10:48:25.681167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:20:55.254 [2024-12-03 10:48:25.681184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.701199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.701248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.254 [2024-12-03 10:48:25.701265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.955 ms 00:20:55.254 [2024-12-03 10:48:25.701285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.701404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.701418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.254 [2024-12-03 10:48:25.701432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:55.254 [2024-12-03 10:48:25.701446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.749411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.749466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.254 [2024-12-03 10:48:25.749483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.891 ms 00:20:55.254 [2024-12-03 10:48:25.749496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.749560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.749576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.254 [2024-12-03 10:48:25.749591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.254 [2024-12-03 10:48:25.749602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.750314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.750370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.254 
[2024-12-03 10:48:25.750394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:20:55.254 [2024-12-03 10:48:25.750407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.750598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.750621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.254 [2024-12-03 10:48:25.750635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:20:55.254 [2024-12-03 10:48:25.750647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.768282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.768327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.254 [2024-12-03 10:48:25.768344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.598 ms 00:20:55.254 [2024-12-03 10:48:25.768356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.783129] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:55.254 [2024-12-03 10:48:25.783188] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.254 [2024-12-03 10:48:25.783208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.783221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.254 [2024-12-03 10:48:25.783237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.699 ms 00:20:55.254 [2024-12-03 10:48:25.783248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.812111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.812164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.254 [2024-12-03 10:48:25.812182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.788 ms 00:20:55.254 [2024-12-03 10:48:25.812194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.825575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.825641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.254 [2024-12-03 10:48:25.825659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.321 ms 00:20:55.254 [2024-12-03 10:48:25.825670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.838766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.838817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.254 [2024-12-03 10:48:25.838848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.022 ms 00:20:55.254 [2024-12-03 10:48:25.838860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.254 [2024-12-03 10:48:25.839416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.254 [2024-12-03 10:48:25.839458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.254 [2024-12-03 10:48:25.839474] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:55.254 [2024-12-03 10:48:25.839487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.907855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.907926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.515 [2024-12-03 10:48:25.907949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.330 ms 00:20:55.515 [2024-12-03 10:48:25.907962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.919838] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.515 [2024-12-03 10:48:25.923310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.923358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.515 [2024-12-03 10:48:25.923384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.271 ms 00:20:55.515 [2024-12-03 10:48:25.923396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.923506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.923523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.515 [2024-12-03 10:48:25.923539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:55.515 [2024-12-03 10:48:25.923553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.925153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.925203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.515 [2024-12-03 10:48:25.925219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:20:55.515 [2024-12-03 10:48:25.925240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.926731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.926785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:55.515 [2024-12-03 10:48:25.926801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:20:55.515 [2024-12-03 10:48:25.926813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.926867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.926890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.515 [2024-12-03 10:48:25.926903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:55.515 [2024-12-03 10:48:25.926915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.926971] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.515 [2024-12-03 10:48:25.926991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.927005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.515 [2024-12-03 10:48:25.927018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:55.515 [2024-12-03 10:48:25.927031] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.953760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.953825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.515 [2024-12-03 10:48:25.953846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.681 ms 00:20:55.515 [2024-12-03 10:48:25.953864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.953988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.515 [2024-12-03 10:48:25.954006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.515 [2024-12-03 10:48:25.954021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:55.515 [2024-12-03 10:48:25.954032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.515 [2024-12-03 10:48:25.960541] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.057 ms, result 0
00:20:56.902  [2024-12-03T10:48:28.461Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-03T10:48:29.502Z] Copying: 28/1024 [MB] (14 MBps) [2024-12-03T10:48:30.445Z] Copying: 45/1024 [MB] (17 MBps) [... 55 similar per-second progress entries elided; instantaneous throughput ranged between 10 and 40 MBps ...] [2024-12-03T10:49:25.485Z] Copying: 1019/1024 [MB] (35 MBps) [2024-12-03T10:49:25.744Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-12-03 10:49:25.554347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.554453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:55.131 [2024-12-03 10:49:25.554478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:55.131 [2024-12-03 10:49:25.554495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.131 [2024-12-03 10:49:25.554535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.131 [2024-12-03 10:49:25.559595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.559647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:55.131 [2024-12-03 10:49:25.559664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.034 ms 00:21:55.131 [2024-12-03 10:49:25.559679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.131 [2024-12-03 10:49:25.562854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.562903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:55.131 [2024-12-03 10:49:25.562921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.140 ms 00:21:55.131 [2024-12-03 10:49:25.562936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.131 [2024-12-03 10:49:25.570462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.570498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:55.131 [2024-12-03 10:49:25.570507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.499 ms 00:21:55.131 [2024-12-03 10:49:25.570515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.131 [2024-12-03 10:49:25.576641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.576673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:55.131 [2024-12-03 10:49:25.576686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.094 ms 00:21:55.131 [2024-12-03 10:49:25.576694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]
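
Note: the 'FTL startup' total of 310.057 ms reported above is roughly the sum of the individual trace_step durations, and the copy phase that follows it moved 1024 MB in about a minute, consistent with the 17 MBps average on the final progress entry. A quick way to re-total the step durations from a saved console log; build.log is a hypothetical capture of this output:

  # Each trace_step block prints one "duration: <ms> ms" line; sum them all.
  grep -o 'duration: [0-9.]* ms' build.log |
    awk '{ sum += $2 } END { printf "sum of traced steps: %.3f ms\n", sum }'
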
status: 0 00:21:55.131 [2024-12-03 10:49:25.600625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.600658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:55.131 [2024-12-03 10:49:25.600668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.891 ms 00:21:55.131 [2024-12-03 10:49:25.600675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.131 [2024-12-03 10:49:25.614634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.614667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:55.131 [2024-12-03 10:49:25.614678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.926 ms 00:21:55.131 [2024-12-03 10:49:25.614686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.131 [2024-12-03 10:49:25.712964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.713010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:55.131 [2024-12-03 10:49:25.713022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.243 ms 00:21:55.131 [2024-12-03 10:49:25.713030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.131 [2024-12-03 10:49:25.737120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.131 [2024-12-03 10:49:25.737155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:55.131 [2024-12-03 10:49:25.737166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.070 ms 00:21:55.131 [2024-12-03 10:49:25.737173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.391 [2024-12-03 10:49:25.760867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.391 [2024-12-03 10:49:25.760899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:55.391 [2024-12-03 10:49:25.760909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.665 ms 00:21:55.391 [2024-12-03 10:49:25.760923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.391 [2024-12-03 10:49:25.783745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.391 [2024-12-03 10:49:25.783777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:55.391 [2024-12-03 10:49:25.783787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.792 ms 00:21:55.391 [2024-12-03 10:49:25.783793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.391 [2024-12-03 10:49:25.806752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.391 [2024-12-03 10:49:25.806782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:55.391 [2024-12-03 10:49:25.806792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.898 ms 00:21:55.391 [2024-12-03 10:49:25.806798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.391 [2024-12-03 10:49:25.806827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:55.391 [2024-12-03 10:49:25.806840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:21:55.391 [2024-12-03 10:49:25.806849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:55.391
[... ftl_dev_dump_bands entries for Bands 3 through 100 elided; every one of them reports 0 / 261120 wr_cnt: 0 state: free, identical to Band 2 ...]
00:21:55.392 [2024-12-03 10:49:25.807599] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:55.392 [2024-12-03 10:49:25.807606]
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9da631bc-1ae5-44d2-a921-b83fcffa847e 00:21:55.392 [2024-12-03 10:49:25.807613] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:21:55.392 [2024-12-03 10:49:25.807620] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 46528 00:21:55.392 [2024-12-03 10:49:25.807630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 45568 00:21:55.392 [2024-12-03 10:49:25.807638] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0211 00:21:55.392 [2024-12-03 10:49:25.807644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:55.392 [2024-12-03 10:49:25.807651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:55.392 [2024-12-03 10:49:25.807659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:55.392 [2024-12-03 10:49:25.807666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:55.392 [2024-12-03 10:49:25.807677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:55.392 [2024-12-03 10:49:25.807683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.392 [2024-12-03 10:49:25.807690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:55.392 [2024-12-03 10:49:25.807699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:21:55.392 [2024-12-03 10:49:25.807706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.820171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.392 [2024-12-03 10:49:25.820204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:55.392 [2024-12-03 10:49:25.820212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.441 ms 00:21:55.392 [2024-12-03 10:49:25.820219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.820404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.392 [2024-12-03 10:49:25.820413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:55.392 [2024-12-03 10:49:25.820420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:21:55.392 [2024-12-03 10:49:25.820427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.855475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.855510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.392 [2024-12-03 10:49:25.855520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.855528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.855578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.855586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.392 [2024-12-03 10:49:25.855594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.855601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.855665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.855674] 
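
Note: the statistics block above makes the WAF figure easy to verify: 46528 total writes against 45568 user writes is exactly the reported 1.0211, meaning about 2.1% of the media traffic was the FTL's own (presumably metadata and housekeeping) writes rather than user data:

  awk 'BEGIN { printf "WAF = %.4f\n", 46528 / 45568 }'   # prints WAF = 1.0211
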
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.392 [2024-12-03 10:49:25.855682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.855689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.855702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.855710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.392 [2024-12-03 10:49:25.855717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.855724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.929358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.929398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.392 [2024-12-03 10:49:25.929408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.929416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.958703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.958735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.392 [2024-12-03 10:49:25.958744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.958751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.958801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.958814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.392 [2024-12-03 10:49:25.958822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.958829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.958866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.958874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.392 [2024-12-03 10:49:25.958882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.392 [2024-12-03 10:49:25.958889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.392 [2024-12-03 10:49:25.958973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.392 [2024-12-03 10:49:25.958982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.393 [2024-12-03 10:49:25.958992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.393 [2024-12-03 10:49:25.958999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.393 [2024-12-03 10:49:25.959027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.393 [2024-12-03 10:49:25.959035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.393 [2024-12-03 10:49:25.959042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.393 [2024-12-03 10:49:25.959050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.393 [2024-12-03 10:49:25.959098] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:21:55.393 [2024-12-03 10:49:25.959107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.393 [2024-12-03 10:49:25.959117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.393 [2024-12-03 10:49:25.959124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.393 [2024-12-03 10:49:25.959163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.393 [2024-12-03 10:49:25.959172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.393 [2024-12-03 10:49:25.959180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.393 [2024-12-03 10:49:25.959187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.393 [2024-12-03 10:49:25.959308] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.941 ms, result 0 00:21:56.328 00:21:56.328 00:21:56.328 10:49:26 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:58.230 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:58.230 10:49:28 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:58.230 10:49:28 -- ftl/restore.sh@85 -- # restore_kill 00:21:58.230 10:49:28 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:58.230 10:49:28 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:58.230 10:49:28 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.230 10:49:28 -- ftl/restore.sh@32 -- # killprocess 73048 00:21:58.230 10:49:28 -- common/autotest_common.sh@936 -- # '[' -z 73048 ']' 00:21:58.230 10:49:28 -- common/autotest_common.sh@940 -- # kill -0 73048 00:21:58.230 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73048) - No such process 00:21:58.230 Process with pid 73048 is not found 00:21:58.230 10:49:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73048 is not found' 00:21:58.230 Remove shared memory files 00:21:58.230 10:49:28 -- ftl/restore.sh@33 -- # remove_shm 00:21:58.230 10:49:28 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:58.230 10:49:28 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:58.230 10:49:28 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:58.230 10:49:28 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:58.230 10:49:28 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:58.230 10:49:28 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:58.230 00:21:58.230 real 4m48.635s 00:21:58.230 user 4m34.949s 00:21:58.230 sys 0m12.870s 00:21:58.230 10:49:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:58.230 10:49:28 -- common/autotest_common.sh@10 -- # set +x 00:21:58.230 ************************************ 00:21:58.230 END TEST ftl_restore 00:21:58.230 ************************************ 00:21:58.230 10:49:28 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:58.230 10:49:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:58.230 10:49:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.230 10:49:28 -- common/autotest_common.sh@10 -- # set +x 00:21:58.230 ************************************ 00:21:58.230 START TEST ftl_dirty_shutdown 00:21:58.230 ************************************ 00:21:58.230 10:49:28 -- 
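
Note: the md5sum -c step above is the pass/fail oracle for ftl_restore: a manifest hashed while the data was first written must match what is read back after the dirty shutdown and restore. The shape of that check, with the testfile path taken from the trace (the hashing itself happened earlier in the test):

  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  md5sum "$testfile" > "$testfile.md5"   # recorded before the device is torn down
  md5sum -c "$testfile.md5"              # prints "<path>: OK" on a byte-identical read-back
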
common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:21:58.230 * Looking for test storage... 00:21:58.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.230 10:49:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:58.230 10:49:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:58.230 10:49:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:58.230 10:49:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:58.230 10:49:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:58.230 10:49:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:58.230 10:49:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:58.230 10:49:28 -- scripts/common.sh@335 -- # IFS=.-: 00:21:58.230 10:49:28 -- scripts/common.sh@335 -- # read -ra ver1 00:21:58.230 10:49:28 -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.230 10:49:28 -- scripts/common.sh@336 -- # read -ra ver2 00:21:58.230 10:49:28 -- scripts/common.sh@337 -- # local 'op=<' 00:21:58.230 10:49:28 -- scripts/common.sh@339 -- # ver1_l=2 00:21:58.230 10:49:28 -- scripts/common.sh@340 -- # ver2_l=1 00:21:58.230 10:49:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:58.230 10:49:28 -- scripts/common.sh@343 -- # case "$op" in 00:21:58.230 10:49:28 -- scripts/common.sh@344 -- # : 1 00:21:58.230 10:49:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:58.230 10:49:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:58.230 10:49:28 -- scripts/common.sh@364 -- # decimal 1 00:21:58.230 10:49:28 -- scripts/common.sh@352 -- # local d=1 00:21:58.230 10:49:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.230 10:49:28 -- scripts/common.sh@354 -- # echo 1 00:21:58.230 10:49:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:58.230 10:49:28 -- scripts/common.sh@365 -- # decimal 2 00:21:58.230 10:49:28 -- scripts/common.sh@352 -- # local d=2 00:21:58.230 10:49:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.230 10:49:28 -- scripts/common.sh@354 -- # echo 2 00:21:58.230 10:49:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:58.230 10:49:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:58.230 10:49:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:58.230 10:49:28 -- scripts/common.sh@367 -- # return 0 00:21:58.230 10:49:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.230 10:49:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.230 --rc genhtml_branch_coverage=1 00:21:58.230 --rc genhtml_function_coverage=1 00:21:58.230 --rc genhtml_legend=1 00:21:58.230 --rc geninfo_all_blocks=1 00:21:58.230 --rc geninfo_unexecuted_blocks=1 00:21:58.230 00:21:58.230 ' 00:21:58.230 10:49:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.230 --rc genhtml_branch_coverage=1 00:21:58.230 --rc genhtml_function_coverage=1 00:21:58.230 --rc genhtml_legend=1 00:21:58.230 --rc geninfo_all_blocks=1 00:21:58.230 --rc geninfo_unexecuted_blocks=1 00:21:58.230 00:21:58.230 ' 00:21:58.230 10:49:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.230 --rc genhtml_branch_coverage=1 00:21:58.230 --rc genhtml_function_coverage=1 
00:21:58.230 --rc genhtml_legend=1 00:21:58.230 --rc geninfo_all_blocks=1 00:21:58.230 --rc geninfo_unexecuted_blocks=1 00:21:58.230 00:21:58.230 ' 00:21:58.230 10:49:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:58.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.230 --rc genhtml_branch_coverage=1 00:21:58.230 --rc genhtml_function_coverage=1 00:21:58.231 --rc genhtml_legend=1 00:21:58.231 --rc geninfo_all_blocks=1 00:21:58.231 --rc geninfo_unexecuted_blocks=1 00:21:58.231 00:21:58.231 ' 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:58.231 10:49:28 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:58.231 10:49:28 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.231 10:49:28 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.231 10:49:28 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:58.231 10:49:28 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:58.231 10:49:28 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.231 10:49:28 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:58.231 10:49:28 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:58.231 10:49:28 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.231 10:49:28 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.231 10:49:28 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:58.231 10:49:28 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:58.231 10:49:28 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.231 10:49:28 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.231 10:49:28 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:58.231 10:49:28 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:58.231 10:49:28 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.231 10:49:28 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.231 10:49:28 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:58.231 10:49:28 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:58.231 10:49:28 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.231 10:49:28 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.231 10:49:28 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.231 10:49:28 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.231 10:49:28 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:58.231 10:49:28 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:58.231 10:49:28 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.231 10:49:28 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:58.231 10:49:28 -- 
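
Note: the cmp_versions trace above splits 1.15 and 2 on dots and compares them field by field to conclude that the installed lcov predates 2.x, which is what selects the branch/function coverage flags just exported. A one-line equivalent using GNU version sort, offered as a sketch rather than the script's own implementation:

  ver=1.15
  # after a version sort the first line is the older version; guard the
  # equality case so the test stays a strict "less than"
  if [ "$ver" != 2 ] && [ "$(printf '%s\n' "$ver" 2 | sort -V | head -n1)" = "$ver" ]; then
    echo "lcov $ver is older than 2"
  fi
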
ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76123 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76123 00:21:58.231 10:49:28 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:58.231 10:49:28 -- common/autotest_common.sh@829 -- # '[' -z 76123 ']' 00:21:58.231 10:49:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.231 10:49:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.231 10:49:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.231 10:49:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.231 10:49:28 -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 [2024-12-03 10:49:28.813675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:58.231 [2024-12-03 10:49:28.813784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76123 ] 00:21:58.491 [2024-12-03 10:49:28.958141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.750 [2024-12-03 10:49:29.175389] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:58.750 [2024-12-03 10:49:29.175607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.131 10:49:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.131 10:49:30 -- common/autotest_common.sh@862 -- # return 0 00:22:00.131 10:49:30 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:00.131 10:49:30 -- ftl/common.sh@54 -- # local name=nvme0 00:22:00.131 10:49:30 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:00.131 10:49:30 -- ftl/common.sh@56 -- # local size=103424 00:22:00.131 10:49:30 -- ftl/common.sh@59 -- # local base_bdev 00:22:00.131 10:49:30 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:00.131 10:49:30 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:00.131 10:49:30 -- ftl/common.sh@62 -- # local base_size 00:22:00.131 10:49:30 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:00.131 10:49:30 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:22:00.131 10:49:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:00.131 10:49:30 -- common/autotest_common.sh@1369 -- # local bs 00:22:00.131 10:49:30 -- common/autotest_common.sh@1370 -- # local nb 00:22:00.131 
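
Note: for reference, the option handling traced above (dirty_shutdown.sh parses its arguments with getopts ':u:c:') in a self-contained form; the -u branch is never taken in this run, so its meaning here is an assumption:

  set -- -c 0000:00:06.0 0000:00:07.0   # argv exactly as passed to dirty_shutdown.sh
  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;            # NV-cache controller, 0000:00:06.0 here
      u) uuid=$OPTARG ;;                # assumed: an FTL UUID to reuse
    esac
  done
  shift 2                               # matches the traced "shift 2": drops -c and its value
  device=$1                             # base device, 0000:00:07.0 here
  echo "cache=$nv_cache base=$device"
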
10:49:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:00.392 10:49:30 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:00.392 { 00:22:00.392 "name": "nvme0n1", 00:22:00.392 "aliases": [ 00:22:00.392 "44e330ef-d854-4009-81fb-127556064c97" 00:22:00.392 ], 00:22:00.392 "product_name": "NVMe disk", 00:22:00.392 "block_size": 4096, 00:22:00.392 "num_blocks": 1310720, 00:22:00.392 "uuid": "44e330ef-d854-4009-81fb-127556064c97", 00:22:00.392 "assigned_rate_limits": { 00:22:00.392 "rw_ios_per_sec": 0, 00:22:00.392 "rw_mbytes_per_sec": 0, 00:22:00.392 "r_mbytes_per_sec": 0, 00:22:00.392 "w_mbytes_per_sec": 0 00:22:00.392 }, 00:22:00.392 "claimed": true, 00:22:00.392 "claim_type": "read_many_write_one", 00:22:00.392 "zoned": false, 00:22:00.392 "supported_io_types": { 00:22:00.392 "read": true, 00:22:00.392 "write": true, 00:22:00.392 "unmap": true, 00:22:00.392 "write_zeroes": true, 00:22:00.392 "flush": true, 00:22:00.392 "reset": true, 00:22:00.392 "compare": true, 00:22:00.392 "compare_and_write": false, 00:22:00.392 "abort": true, 00:22:00.392 "nvme_admin": true, 00:22:00.392 "nvme_io": true 00:22:00.392 }, 00:22:00.392 "driver_specific": { 00:22:00.392 "nvme": [ 00:22:00.392 { 00:22:00.392 "pci_address": "0000:00:07.0", 00:22:00.392 "trid": { 00:22:00.392 "trtype": "PCIe", 00:22:00.392 "traddr": "0000:00:07.0" 00:22:00.392 }, 00:22:00.392 "ctrlr_data": { 00:22:00.392 "cntlid": 0, 00:22:00.392 "vendor_id": "0x1b36", 00:22:00.392 "model_number": "QEMU NVMe Ctrl", 00:22:00.392 "serial_number": "12341", 00:22:00.392 "firmware_revision": "8.0.0", 00:22:00.392 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:00.392 "oacs": { 00:22:00.392 "security": 0, 00:22:00.392 "format": 1, 00:22:00.392 "firmware": 0, 00:22:00.392 "ns_manage": 1 00:22:00.392 }, 00:22:00.392 "multi_ctrlr": false, 00:22:00.392 "ana_reporting": false 00:22:00.392 }, 00:22:00.392 "vs": { 00:22:00.392 "nvme_version": "1.4" 00:22:00.392 }, 00:22:00.392 "ns_data": { 00:22:00.392 "id": 1, 00:22:00.392 "can_share": false 00:22:00.392 } 00:22:00.392 } 00:22:00.392 ], 00:22:00.392 "mp_policy": "active_passive" 00:22:00.392 } 00:22:00.392 } 00:22:00.392 ]' 00:22:00.392 10:49:30 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:00.392 10:49:30 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:00.392 10:49:30 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:00.392 10:49:30 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:00.392 10:49:30 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:00.392 10:49:30 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:00.392 10:49:30 -- ftl/common.sh@63 -- # base_size=5120 00:22:00.392 10:49:30 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:00.392 10:49:30 -- ftl/common.sh@67 -- # clear_lvols 00:22:00.392 10:49:30 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:00.392 10:49:30 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:00.652 10:49:31 -- ftl/common.sh@28 -- # stores=0969ec1a-327f-43ef-9eba-bfd861c3e4ed 00:22:00.652 10:49:31 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:00.652 10:49:31 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0969ec1a-327f-43ef-9eba-bfd861c3e4ed 00:22:00.912 10:49:31 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:00.912 10:49:31 -- ftl/common.sh@68 -- # 
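
Note: get_bdev_size above multiplies the two jq-extracted fields and scales to MiB: with bs=4096 and nb=1310720 that gives the 5120 the helper echoes. A minimal restatement (the real helper fetches the JSON once; this sketch calls rpc.py twice for brevity and requires jq):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bs=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
  nb=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
  echo $(( bs * nb / 1024 / 1024 ))                               # 5120 (MiB)
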
lvs=eeef358e-a64f-4b71-a34f-24b470a9def2 00:22:00.912 10:49:31 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eeef358e-a64f-4b71-a34f-24b470a9def2 00:22:01.224 10:49:31 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.224 10:49:31 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:01.224 10:49:31 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.224 10:49:31 -- ftl/common.sh@35 -- # local name=nvc0 00:22:01.224 10:49:31 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:01.224 10:49:31 -- ftl/common.sh@37 -- # local base_bdev=966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.224 10:49:31 -- ftl/common.sh@38 -- # local cache_size= 00:22:01.224 10:49:31 -- ftl/common.sh@41 -- # get_bdev_size 966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.224 10:49:31 -- common/autotest_common.sh@1367 -- # local bdev_name=966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.224 10:49:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:01.224 10:49:31 -- common/autotest_common.sh@1369 -- # local bs 00:22:01.224 10:49:31 -- common/autotest_common.sh@1370 -- # local nb 00:22:01.224 10:49:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.494 10:49:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:01.494 { 00:22:01.494 "name": "966e4538-079a-4cdb-8831-78ca03b22a87", 00:22:01.494 "aliases": [ 00:22:01.494 "lvs/nvme0n1p0" 00:22:01.494 ], 00:22:01.494 "product_name": "Logical Volume", 00:22:01.494 "block_size": 4096, 00:22:01.494 "num_blocks": 26476544, 00:22:01.494 "uuid": "966e4538-079a-4cdb-8831-78ca03b22a87", 00:22:01.494 "assigned_rate_limits": { 00:22:01.494 "rw_ios_per_sec": 0, 00:22:01.494 "rw_mbytes_per_sec": 0, 00:22:01.494 "r_mbytes_per_sec": 0, 00:22:01.494 "w_mbytes_per_sec": 0 00:22:01.494 }, 00:22:01.494 "claimed": false, 00:22:01.494 "zoned": false, 00:22:01.494 "supported_io_types": { 00:22:01.494 "read": true, 00:22:01.494 "write": true, 00:22:01.494 "unmap": true, 00:22:01.494 "write_zeroes": true, 00:22:01.494 "flush": false, 00:22:01.494 "reset": true, 00:22:01.494 "compare": false, 00:22:01.494 "compare_and_write": false, 00:22:01.494 "abort": false, 00:22:01.494 "nvme_admin": false, 00:22:01.494 "nvme_io": false 00:22:01.494 }, 00:22:01.494 "driver_specific": { 00:22:01.494 "lvol": { 00:22:01.494 "lvol_store_uuid": "eeef358e-a64f-4b71-a34f-24b470a9def2", 00:22:01.494 "base_bdev": "nvme0n1", 00:22:01.494 "thin_provision": true, 00:22:01.494 "snapshot": false, 00:22:01.494 "clone": false, 00:22:01.494 "esnap_clone": false 00:22:01.494 } 00:22:01.494 } 00:22:01.494 } 00:22:01.494 ]' 00:22:01.494 10:49:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:01.494 10:49:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:01.494 10:49:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:01.494 10:49:31 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:01.494 10:49:31 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:01.494 10:49:31 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:01.494 10:49:31 -- ftl/common.sh@41 -- # local base_size=5171 00:22:01.494 10:49:31 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:01.494 10:49:31 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 
0000:00:06.0 00:22:01.755 10:49:32 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:01.755 10:49:32 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:01.755 10:49:32 -- ftl/common.sh@48 -- # get_bdev_size 966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.755 10:49:32 -- common/autotest_common.sh@1367 -- # local bdev_name=966e4538-079a-4cdb-8831-78ca03b22a87 00:22:01.755 10:49:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:01.755 10:49:32 -- common/autotest_common.sh@1369 -- # local bs 00:22:01.755 10:49:32 -- common/autotest_common.sh@1370 -- # local nb 00:22:01.755 10:49:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 966e4538-079a-4cdb-8831-78ca03b22a87 00:22:02.017 10:49:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:02.017 { 00:22:02.017 "name": "966e4538-079a-4cdb-8831-78ca03b22a87", 00:22:02.017 "aliases": [ 00:22:02.017 "lvs/nvme0n1p0" 00:22:02.017 ], 00:22:02.017 "product_name": "Logical Volume", 00:22:02.017 "block_size": 4096, 00:22:02.017 "num_blocks": 26476544, 00:22:02.017 "uuid": "966e4538-079a-4cdb-8831-78ca03b22a87", 00:22:02.017 "assigned_rate_limits": { 00:22:02.017 "rw_ios_per_sec": 0, 00:22:02.017 "rw_mbytes_per_sec": 0, 00:22:02.017 "r_mbytes_per_sec": 0, 00:22:02.017 "w_mbytes_per_sec": 0 00:22:02.017 }, 00:22:02.017 "claimed": false, 00:22:02.017 "zoned": false, 00:22:02.017 "supported_io_types": { 00:22:02.017 "read": true, 00:22:02.017 "write": true, 00:22:02.017 "unmap": true, 00:22:02.017 "write_zeroes": true, 00:22:02.017 "flush": false, 00:22:02.017 "reset": true, 00:22:02.017 "compare": false, 00:22:02.017 "compare_and_write": false, 00:22:02.017 "abort": false, 00:22:02.017 "nvme_admin": false, 00:22:02.017 "nvme_io": false 00:22:02.017 }, 00:22:02.017 "driver_specific": { 00:22:02.017 "lvol": { 00:22:02.017 "lvol_store_uuid": "eeef358e-a64f-4b71-a34f-24b470a9def2", 00:22:02.017 "base_bdev": "nvme0n1", 00:22:02.017 "thin_provision": true, 00:22:02.017 "snapshot": false, 00:22:02.017 "clone": false, 00:22:02.017 "esnap_clone": false 00:22:02.017 } 00:22:02.017 } 00:22:02.017 } 00:22:02.017 ]' 00:22:02.017 10:49:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:02.017 10:49:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:02.017 10:49:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:02.017 10:49:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:02.017 10:49:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:02.017 10:49:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:02.017 10:49:32 -- ftl/common.sh@48 -- # cache_size=5171 00:22:02.017 10:49:32 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:02.279 10:49:32 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:02.279 10:49:32 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 966e4538-079a-4cdb-8831-78ca03b22a87 00:22:02.279 10:49:32 -- common/autotest_common.sh@1367 -- # local bdev_name=966e4538-079a-4cdb-8831-78ca03b22a87 00:22:02.279 10:49:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:02.279 10:49:32 -- common/autotest_common.sh@1369 -- # local bs 00:22:02.279 10:49:32 -- common/autotest_common.sh@1370 -- # local nb 00:22:02.279 10:49:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 966e4538-079a-4cdb-8831-78ca03b22a87 00:22:02.540 10:49:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:02.540 { 00:22:02.540 
"name": "966e4538-079a-4cdb-8831-78ca03b22a87", 00:22:02.540 "aliases": [ 00:22:02.540 "lvs/nvme0n1p0" 00:22:02.540 ], 00:22:02.540 "product_name": "Logical Volume", 00:22:02.540 "block_size": 4096, 00:22:02.540 "num_blocks": 26476544, 00:22:02.540 "uuid": "966e4538-079a-4cdb-8831-78ca03b22a87", 00:22:02.540 "assigned_rate_limits": { 00:22:02.540 "rw_ios_per_sec": 0, 00:22:02.540 "rw_mbytes_per_sec": 0, 00:22:02.540 "r_mbytes_per_sec": 0, 00:22:02.540 "w_mbytes_per_sec": 0 00:22:02.540 }, 00:22:02.540 "claimed": false, 00:22:02.540 "zoned": false, 00:22:02.540 "supported_io_types": { 00:22:02.540 "read": true, 00:22:02.540 "write": true, 00:22:02.540 "unmap": true, 00:22:02.540 "write_zeroes": true, 00:22:02.540 "flush": false, 00:22:02.540 "reset": true, 00:22:02.540 "compare": false, 00:22:02.540 "compare_and_write": false, 00:22:02.540 "abort": false, 00:22:02.540 "nvme_admin": false, 00:22:02.540 "nvme_io": false 00:22:02.540 }, 00:22:02.540 "driver_specific": { 00:22:02.540 "lvol": { 00:22:02.540 "lvol_store_uuid": "eeef358e-a64f-4b71-a34f-24b470a9def2", 00:22:02.540 "base_bdev": "nvme0n1", 00:22:02.540 "thin_provision": true, 00:22:02.540 "snapshot": false, 00:22:02.540 "clone": false, 00:22:02.540 "esnap_clone": false 00:22:02.540 } 00:22:02.540 } 00:22:02.540 } 00:22:02.540 ]' 00:22:02.541 10:49:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:02.541 10:49:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:02.541 10:49:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:02.541 10:49:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:02.541 10:49:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:02.541 10:49:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:02.541 10:49:32 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:02.541 10:49:32 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 966e4538-079a-4cdb-8831-78ca03b22a87 --l2p_dram_limit 10' 00:22:02.541 10:49:32 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:02.541 10:49:32 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:02.541 10:49:32 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:02.541 10:49:32 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 966e4538-079a-4cdb-8831-78ca03b22a87 --l2p_dram_limit 10 -c nvc0n1p0 00:22:02.541 [2024-12-03 10:49:33.144639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.541 [2024-12-03 10:49:33.144674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.541 [2024-12-03 10:49:33.144686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.541 [2024-12-03 10:49:33.144695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.541 [2024-12-03 10:49:33.144732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.541 [2024-12-03 10:49:33.144743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.541 [2024-12-03 10:49:33.144751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:02.541 [2024-12-03 10:49:33.144756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.541 [2024-12-03 10:49:33.144772] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.541 [2024-12-03 10:49:33.145365] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.541 [2024-12-03 10:49:33.145380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.541 [2024-12-03 10:49:33.145387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.541 [2024-12-03 10:49:33.145395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:22:02.541 [2024-12-03 10:49:33.145400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.541 [2024-12-03 10:49:33.145451] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f590d54a-bc3c-45d1-a063-a9731ee34255 00:22:02.541 [2024-12-03 10:49:33.146405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.541 [2024-12-03 10:49:33.146422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:02.541 [2024-12-03 10:49:33.146430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:02.541 [2024-12-03 10:49:33.146437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.541 [2024-12-03 10:49:33.151314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.541 [2024-12-03 10:49:33.151338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.541 [2024-12-03 10:49:33.151345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.841 ms 00:22:02.541 [2024-12-03 10:49:33.151352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.541 [2024-12-03 10:49:33.151416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.541 [2024-12-03 10:49:33.151430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.541 [2024-12-03 10:49:33.151436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:02.541 [2024-12-03 10:49:33.151446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.802 [2024-12-03 10:49:33.151479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.803 [2024-12-03 10:49:33.151490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.803 [2024-12-03 10:49:33.151496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.803 [2024-12-03 10:49:33.151502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.803 [2024-12-03 10:49:33.151520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.803 [2024-12-03 10:49:33.154489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.803 [2024-12-03 10:49:33.154507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.803 [2024-12-03 10:49:33.154515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:22:02.803 [2024-12-03 10:49:33.154521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.803 [2024-12-03 10:49:33.154549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.803 [2024-12-03 10:49:33.154555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.803 [2024-12-03 10:49:33.154562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:02.803 [2024-12-03 10:49:33.154567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:02.803 [2024-12-03 10:49:33.154581] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:02.803 [2024-12-03 10:49:33.154665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:02.803 [2024-12-03 10:49:33.154676] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.803 [2024-12-03 10:49:33.154684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:02.803 [2024-12-03 10:49:33.154693] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.803 [2024-12-03 10:49:33.154699] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.803 [2024-12-03 10:49:33.154708] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.803 [2024-12-03 10:49:33.154719] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.803 [2024-12-03 10:49:33.154726] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:02.803 [2024-12-03 10:49:33.154731] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:02.803 [2024-12-03 10:49:33.154737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.803 [2024-12-03 10:49:33.154743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.803 [2024-12-03 10:49:33.154750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:22:02.803 [2024-12-03 10:49:33.154755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.803 [2024-12-03 10:49:33.154803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.803 [2024-12-03 10:49:33.154809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.803 [2024-12-03 10:49:33.154816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:02.803 [2024-12-03 10:49:33.154823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.803 [2024-12-03 10:49:33.154879] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.803 [2024-12-03 10:49:33.154886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.803 [2024-12-03 10:49:33.154894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.803 [2024-12-03 10:49:33.154899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.803 [2024-12-03 10:49:33.154906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.803 [2024-12-03 10:49:33.154912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.803 [2024-12-03 10:49:33.154918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.803 [2024-12-03 10:49:33.154923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.803 [2024-12-03 10:49:33.154929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.803 [2024-12-03 10:49:33.154934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.803 [2024-12-03 10:49:33.154940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.803 [2024-12-03 10:49:33.154945] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.803 [2024-12-03 10:49:33.154952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.803 [2024-12-03 10:49:33.154957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.803 [2024-12-03 10:49:33.154963] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:02.803 [2024-12-03 10:49:33.154968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.803 [2024-12-03 10:49:33.154976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.803 [2024-12-03 10:49:33.154981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:02.803 [2024-12-03 10:49:33.154986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.803 [2024-12-03 10:49:33.154991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:02.803 [2024-12-03 10:49:33.154997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:02.803 [2024-12-03 10:49:33.155002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:02.803 [2024-12-03 10:49:33.155009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.803 [2024-12-03 10:49:33.155014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.803 [2024-12-03 10:49:33.155020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.803 [2024-12-03 10:49:33.155025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.803 [2024-12-03 10:49:33.155031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:02.803 [2024-12-03 10:49:33.155035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.803 [2024-12-03 10:49:33.155041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.803 [2024-12-03 10:49:33.155048] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.803 [2024-12-03 10:49:33.155070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.803 [2024-12-03 10:49:33.155075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.803 [2024-12-03 10:49:33.155083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:02.803 [2024-12-03 10:49:33.155088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.803 [2024-12-03 10:49:33.155094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.803 [2024-12-03 10:49:33.155099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.803 [2024-12-03 10:49:33.155105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.803 [2024-12-03 10:49:33.155110] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.803 [2024-12-03 10:49:33.155118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:02.803 [2024-12-03 10:49:33.155123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.803 [2024-12-03 10:49:33.155128] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.803 [2024-12-03 10:49:33.155134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.803 [2024-12-03 10:49:33.155140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.803 
[2024-12-03 10:49:33.155146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.803 [2024-12-03 10:49:33.155155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.803 [2024-12-03 10:49:33.155160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.803 [2024-12-03 10:49:33.155166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.803 [2024-12-03 10:49:33.155171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.803 [2024-12-03 10:49:33.155179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.803 [2024-12-03 10:49:33.155184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.803 [2024-12-03 10:49:33.155192] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.803 [2024-12-03 10:49:33.155199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.803 [2024-12-03 10:49:33.155206] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.803 [2024-12-03 10:49:33.155212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:02.803 [2024-12-03 10:49:33.155218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:02.803 [2024-12-03 10:49:33.155223] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:02.803 [2024-12-03 10:49:33.155230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:02.803 [2024-12-03 10:49:33.155235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:02.803 [2024-12-03 10:49:33.155241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:02.803 [2024-12-03 10:49:33.155247] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:02.803 [2024-12-03 10:49:33.155253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:02.803 [2024-12-03 10:49:33.155259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:02.803 [2024-12-03 10:49:33.155265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:02.803 [2024-12-03 10:49:33.155271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:02.803 [2024-12-03 10:49:33.155288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:02.803 [2024-12-03 10:49:33.155293] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.803 [2024-12-03 
10:49:33.155301] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.804 [2024-12-03 10:49:33.155307] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.804 [2024-12-03 10:49:33.155314] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.804 [2024-12-03 10:49:33.155319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.804 [2024-12-03 10:49:33.155326] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.804 [2024-12-03 10:49:33.155331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.155338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.804 [2024-12-03 10:49:33.155344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:22:02.804 [2024-12-03 10:49:33.155351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.167739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.167764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.804 [2024-12-03 10:49:33.167772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.347 ms 00:22:02.804 [2024-12-03 10:49:33.167779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.167849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.167858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.804 [2024-12-03 10:49:33.167866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:02.804 [2024-12-03 10:49:33.167872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.192293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.192317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.804 [2024-12-03 10:49:33.192326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.388 ms 00:22:02.804 [2024-12-03 10:49:33.192333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.192357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.192366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.804 [2024-12-03 10:49:33.192373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:02.804 [2024-12-03 10:49:33.192381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.192690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.192705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.804 [2024-12-03 10:49:33.192712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:22:02.804 [2024-12-03 10:49:33.192719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.192804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.192813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.804 [2024-12-03 10:49:33.192819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:02.804 [2024-12-03 10:49:33.192827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.204868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.204893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.804 [2024-12-03 10:49:33.204901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.027 ms 00:22:02.804 [2024-12-03 10:49:33.204908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.213836] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:02.804 [2024-12-03 10:49:33.216222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.216242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.804 [2024-12-03 10:49:33.216253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.258 ms 00:22:02.804 [2024-12-03 10:49:33.216260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.288832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.804 [2024-12-03 10:49:33.288860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:02.804 [2024-12-03 10:49:33.288870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.549 ms 00:22:02.804 [2024-12-03 10:49:33.288876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.804 [2024-12-03 10:49:33.288909] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
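[Annotation: the device stack that FTL is starting up on here was assembled a few steps back; the RPC sequence below is reproduced verbatim from the trace above. The 0000:00:06.0 BDF, the 103424 MiB thin-provisioned lvol, and the 5171 MiB cache split are values from this particular run, so treat this as an illustrative sketch of the stack rather than a general recipe; paths are abbreviated to scripts/rpc.py.]

  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eeef358e-a64f-4b71-a34f-24b470a9def2
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 966e4538-079a-4cdb-8831-78ca03b22a87 --l2p_dram_limit 10 -c nvc0n1p0

[The thin-provisioned lvol serves as the FTL base device, split nvc0n1p0 serves as the non-volatile write-buffer cache, and --l2p_dram_limit 10 caps the resident L2P table at 10 MiB, matching the "l2p maximum resident size is: 9 (of 10) MiB" notice above.]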
00:22:02.804 [2024-12-03 10:49:33.288917] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:07.011 [2024-12-03 10:49:37.078753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.078808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:07.011 [2024-12-03 10:49:37.078825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3789.824 ms 00:22:07.011 [2024-12-03 10:49:37.078833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.079006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.079018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.011 [2024-12-03 10:49:37.079032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:22:07.011 [2024-12-03 10:49:37.079039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.099275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.099315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:07.011 [2024-12-03 10:49:37.099327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.178 ms 00:22:07.011 [2024-12-03 10:49:37.099333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.117616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.117642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:07.011 [2024-12-03 10:49:37.117655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.245 ms 00:22:07.011 [2024-12-03 10:49:37.117661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.117920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.117928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.011 [2024-12-03 10:49:37.117937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:22:07.011 [2024-12-03 10:49:37.117943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.172232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.172257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:07.011 [2024-12-03 10:49:37.172267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.252 ms 00:22:07.011 [2024-12-03 10:49:37.172273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.191062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.191086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:07.011 [2024-12-03 10:49:37.191096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.758 ms 00:22:07.011 [2024-12-03 10:49:37.191102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.192046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.192081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:22:07.011 [2024-12-03 10:49:37.192092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:22:07.011 [2024-12-03 10:49:37.192098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.210700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.210722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:07.011 [2024-12-03 10:49:37.210731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.573 ms 00:22:07.011 [2024-12-03 10:49:37.210737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.210771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.011 [2024-12-03 10:49:37.210779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.011 [2024-12-03 10:49:37.210787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:07.011 [2024-12-03 10:49:37.210793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.011 [2024-12-03 10:49:37.210855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.012 [2024-12-03 10:49:37.210863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:07.012 [2024-12-03 10:49:37.210870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:07.012 [2024-12-03 10:49:37.210876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.012 [2024-12-03 10:49:37.211604] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4066.606 ms, result 0 00:22:07.012 { 00:22:07.012 "name": "ftl0", 00:22:07.012 "uuid": "f590d54a-bc3c-45d1-a063-a9731ee34255" 00:22:07.012 } 00:22:07.012 10:49:37 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:07.012 10:49:37 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:07.012 10:49:37 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:07.012 10:49:37 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:07.012 10:49:37 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:07.012 /dev/nbd0 00:22:07.273 10:49:37 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:07.273 10:49:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:07.273 10:49:37 -- common/autotest_common.sh@867 -- # local i 00:22:07.273 10:49:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:07.273 10:49:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:07.273 10:49:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:07.273 10:49:37 -- common/autotest_common.sh@871 -- # break 00:22:07.273 10:49:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:07.273 10:49:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:07.273 10:49:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:07.273 1+0 records in 00:22:07.273 1+0 records out 00:22:07.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277197 s, 14.8 MB/s 00:22:07.273 10:49:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:07.273 10:49:37 -- common/autotest_common.sh@884 -- # size=4096 00:22:07.273 10:49:37 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:07.273 10:49:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:07.273 10:49:37 -- common/autotest_common.sh@887 -- # return 0 00:22:07.273 10:49:37 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:07.273 [2024-12-03 10:49:37.700982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:07.273 [2024-12-03 10:49:37.701101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76288 ] 00:22:07.273 [2024-12-03 10:49:37.849001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.534 [2024-12-03 10:49:38.021093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.920  [2024-12-03T10:49:40.470Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-03T10:49:41.406Z] Copying: 443/1024 [MB] (247 MBps) [2024-12-03T10:49:42.342Z] Copying: 703/1024 [MB] (260 MBps) [2024-12-03T10:49:42.600Z] Copying: 952/1024 [MB] (249 MBps) [2024-12-03T10:49:43.167Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:22:12.554 00:22:12.554 10:49:43 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:14.467 10:49:44 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:14.467 [2024-12-03 10:49:44.813297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
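[Annotation: the spdk_dd invocations above implement the test's write phase. 262144 blocks x 4096 bytes is exactly 1 GiB, hence the 1024/1024 [MB] progress counters: /dev/urandom is first staged into testfile, the file is fingerprinted with md5sum, and the data is then written through /dev/nbd0, which nbd_start_disk bound to ftl0 just before. A minimal sketch of the flow, using only commands already present in the log (paths abbreviated):]

  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile
  spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

[The checksum is retained so the same data can be verified after the device is torn down and brought back up, which is presumably what the dirty-shutdown test checks later in the script.]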
00:22:14.467 [2024-12-03 10:49:44.813385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76385 ] 00:22:14.467 [2024-12-03 10:49:44.952668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.735 [2024-12-03 10:49:45.131398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.126  [2024-12-03T10:49:47.682Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-03T10:49:48.625Z] Copying: 29/1024 [MB] (14 MBps) [2024-12-03T10:49:49.568Z] Copying: 51/1024 [MB] (22 MBps) [2024-12-03T10:49:50.508Z] Copying: 71/1024 [MB] (19 MBps) [2024-12-03T10:49:51.443Z] Copying: 95/1024 [MB] (23 MBps) [2024-12-03T10:49:52.377Z] Copying: 112/1024 [MB] (17 MBps) [2024-12-03T10:49:53.748Z] Copying: 131/1024 [MB] (18 MBps) [2024-12-03T10:49:54.681Z] Copying: 150/1024 [MB] (18 MBps) [2024-12-03T10:49:55.617Z] Copying: 165/1024 [MB] (15 MBps) [2024-12-03T10:49:56.547Z] Copying: 196/1024 [MB] (30 MBps) [2024-12-03T10:49:57.481Z] Copying: 219/1024 [MB] (23 MBps) [2024-12-03T10:49:58.416Z] Copying: 237/1024 [MB] (17 MBps) [2024-12-03T10:49:59.350Z] Copying: 253/1024 [MB] (16 MBps) [2024-12-03T10:50:00.739Z] Copying: 287/1024 [MB] (33 MBps) [2024-12-03T10:50:01.380Z] Copying: 315/1024 [MB] (27 MBps) [2024-12-03T10:50:02.765Z] Copying: 326/1024 [MB] (11 MBps) [2024-12-03T10:50:03.706Z] Copying: 343/1024 [MB] (16 MBps) [2024-12-03T10:50:04.641Z] Copying: 356/1024 [MB] (13 MBps) [2024-12-03T10:50:05.574Z] Copying: 374/1024 [MB] (18 MBps) [2024-12-03T10:50:06.507Z] Copying: 390/1024 [MB] (15 MBps) [2024-12-03T10:50:07.444Z] Copying: 406/1024 [MB] (15 MBps) [2024-12-03T10:50:08.379Z] Copying: 424/1024 [MB] (18 MBps) [2024-12-03T10:50:09.753Z] Copying: 438/1024 [MB] (14 MBps) [2024-12-03T10:50:10.687Z] Copying: 449/1024 [MB] (11 MBps) [2024-12-03T10:50:11.622Z] Copying: 468/1024 [MB] (18 MBps) [2024-12-03T10:50:12.556Z] Copying: 487/1024 [MB] (19 MBps) [2024-12-03T10:50:13.494Z] Copying: 506/1024 [MB] (18 MBps) [2024-12-03T10:50:14.424Z] Copying: 524/1024 [MB] (17 MBps) [2024-12-03T10:50:15.355Z] Copying: 543/1024 [MB] (19 MBps) [2024-12-03T10:50:16.723Z] Copying: 559/1024 [MB] (15 MBps) [2024-12-03T10:50:17.658Z] Copying: 578/1024 [MB] (19 MBps) [2024-12-03T10:50:18.592Z] Copying: 597/1024 [MB] (19 MBps) [2024-12-03T10:50:19.527Z] Copying: 614/1024 [MB] (16 MBps) [2024-12-03T10:50:20.461Z] Copying: 625/1024 [MB] (11 MBps) [2024-12-03T10:50:21.404Z] Copying: 644/1024 [MB] (18 MBps) [2024-12-03T10:50:22.348Z] Copying: 675/1024 [MB] (31 MBps) [2024-12-03T10:50:23.727Z] Copying: 707/1024 [MB] (32 MBps) [2024-12-03T10:50:24.662Z] Copying: 738/1024 [MB] (30 MBps) [2024-12-03T10:50:25.595Z] Copying: 772/1024 [MB] (34 MBps) [2024-12-03T10:50:26.530Z] Copying: 807/1024 [MB] (35 MBps) [2024-12-03T10:50:27.465Z] Copying: 829/1024 [MB] (21 MBps) [2024-12-03T10:50:28.400Z] Copying: 843/1024 [MB] (13 MBps) [2024-12-03T10:50:29.777Z] Copying: 858/1024 [MB] (15 MBps) [2024-12-03T10:50:30.366Z] Copying: 871/1024 [MB] (13 MBps) [2024-12-03T10:50:31.754Z] Copying: 890/1024 [MB] (18 MBps) [2024-12-03T10:50:32.690Z] Copying: 907/1024 [MB] (17 MBps) [2024-12-03T10:50:33.633Z] Copying: 923/1024 [MB] (16 MBps) [2024-12-03T10:50:34.575Z] Copying: 940/1024 [MB] (17 MBps) [2024-12-03T10:50:35.509Z] Copying: 965/1024 [MB] (24 MBps) [2024-12-03T10:50:36.444Z] Copying: 982/1024 [MB] (16 MBps) [2024-12-03T10:50:37.378Z] Copying: 1001/1024 
[MB] (19 MBps) [2024-12-03T10:50:37.378Z] Copying: 1023/1024 [MB] (22 MBps) [2024-12-03T10:50:38.317Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:23:07.704 00:23:07.704 10:50:38 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:07.704 10:50:38 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:07.704 10:50:38 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:07.966 [2024-12-03 10:50:38.432966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.966 [2024-12-03 10:50:38.433004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:07.966 [2024-12-03 10:50:38.433016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:07.966 [2024-12-03 10:50:38.433023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.966 [2024-12-03 10:50:38.433041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:07.966 [2024-12-03 10:50:38.435131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.966 [2024-12-03 10:50:38.435172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.966 [2024-12-03 10:50:38.435182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:23:07.966 [2024-12-03 10:50:38.435188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.966 [2024-12-03 10:50:38.438216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.966 [2024-12-03 10:50:38.438318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.966 [2024-12-03 10:50:38.438373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:23:07.967 [2024-12-03 10:50:38.438398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.967 [2024-12-03 10:50:38.459655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.967 [2024-12-03 10:50:38.459683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.967 [2024-12-03 10:50:38.459697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.205 ms 00:23:07.967 [2024-12-03 10:50:38.459705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.967 [2024-12-03 10:50:38.465806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.967 [2024-12-03 10:50:38.465828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:07.967 [2024-12-03 10:50:38.465841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:23:07.967 [2024-12-03 10:50:38.465850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.967 [2024-12-03 10:50:38.490428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.967 [2024-12-03 10:50:38.490457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.967 [2024-12-03 10:50:38.490469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.500 ms 00:23:07.967 [2024-12-03 10:50:38.490477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.967 [2024-12-03 10:50:38.513917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.967 [2024-12-03 10:50:38.513956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:23:07.967 [2024-12-03 10:50:38.513971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.401 ms 00:23:07.967 [2024-12-03 10:50:38.513980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.967 [2024-12-03 10:50:38.514161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.967 [2024-12-03 10:50:38.514174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:07.967 [2024-12-03 10:50:38.514184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:23:07.967 [2024-12-03 10:50:38.514192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.967 [2024-12-03 10:50:38.538277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.967 [2024-12-03 10:50:38.538308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:07.967 [2024-12-03 10:50:38.538320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.061 ms 00:23:07.967 [2024-12-03 10:50:38.538327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.967 [2024-12-03 10:50:38.562232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.967 [2024-12-03 10:50:38.562262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:07.967 [2024-12-03 10:50:38.562274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.863 ms 00:23:07.967 [2024-12-03 10:50:38.562282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.230 [2024-12-03 10:50:38.585503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.230 [2024-12-03 10:50:38.585531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:08.230 [2024-12-03 10:50:38.585543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.179 ms 00:23:08.230 [2024-12-03 10:50:38.585551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.230 [2024-12-03 10:50:38.609254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.230 [2024-12-03 10:50:38.609281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:08.230 [2024-12-03 10:50:38.609293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.623 ms 00:23:08.230 [2024-12-03 10:50:38.609300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.230 [2024-12-03 10:50:38.609341] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:08.230 [2024-12-03 10:50:38.609355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609624] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:08.230 [2024-12-03 10:50:38.609688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 
10:50:38.609837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.609996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:23:08.231 [2024-12-03 10:50:38.610048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:08.231 [2024-12-03 10:50:38.610230] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:08.231 [2024-12-03 10:50:38.610240] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f590d54a-bc3c-45d1-a063-a9731ee34255 00:23:08.231 [2024-12-03 10:50:38.610250] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:08.231 [2024-12-03 10:50:38.610259] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:08.231 [2024-12-03 10:50:38.610265] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:08.231 [2024-12-03 10:50:38.610274] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:08.231 [2024-12-03 10:50:38.610280] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:23:08.231 [2024-12-03 10:50:38.610289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:08.231 [2024-12-03 10:50:38.610298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:08.231 [2024-12-03 10:50:38.610306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:08.232 [2024-12-03 10:50:38.610312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:08.232 [2024-12-03 10:50:38.610322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.232 [2024-12-03 10:50:38.610329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:08.232 [2024-12-03 10:50:38.610339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:23:08.232 [2024-12-03 10:50:38.610346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.623318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.232 [2024-12-03 10:50:38.623345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:08.232 [2024-12-03 10:50:38.623357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.938 ms 00:23:08.232 [2024-12-03 10:50:38.623365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.623569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.232 [2024-12-03 10:50:38.623578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:08.232 [2024-12-03 10:50:38.623588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:23:08.232 [2024-12-03 10:50:38.623595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.670473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.670507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:08.232 [2024-12-03 10:50:38.670519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.670528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.670600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.670608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:08.232 [2024-12-03 10:50:38.670618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.670626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.670697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.670708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:08.232 [2024-12-03 10:50:38.670717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.670725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.670745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.670753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:08.232 [2024-12-03 10:50:38.670762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.670769] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.751836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.751880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:08.232 [2024-12-03 10:50:38.751895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.751903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.784384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.784428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:08.232 [2024-12-03 10:50:38.784443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.784452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.784534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.784545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:08.232 [2024-12-03 10:50:38.784556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.784564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.784615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.784626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:08.232 [2024-12-03 10:50:38.784637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.784644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.784758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.784772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:08.232 [2024-12-03 10:50:38.784783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.784791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.784836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.784846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:08.232 [2024-12-03 10:50:38.784856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.784864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.784910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.784923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:08.232 [2024-12-03 10:50:38.784934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:08.232 [2024-12-03 10:50:38.784941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.784995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:08.232 [2024-12-03 10:50:38.785007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:08.232 [2024-12-03 10:50:38.785018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:08.232 [2024-12-03 10:50:38.785027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.232 [2024-12-03 10:50:38.785232] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.206 ms, result 0 00:23:08.232 true 00:23:08.232 10:50:38 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76123 00:23:08.232 10:50:38 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76123 00:23:08.232 10:50:38 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:08.494 [2024-12-03 10:50:38.883175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.494 [2024-12-03 10:50:38.883328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76952 ] 00:23:08.494 [2024-12-03 10:50:39.036766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.754 [2024-12-03 10:50:39.258636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.142  [2024-12-03T10:50:41.697Z] Copying: 183/1024 [MB] (183 MBps) [2024-12-03T10:50:42.632Z] Copying: 372/1024 [MB] (188 MBps) [2024-12-03T10:50:43.566Z] Copying: 627/1024 [MB] (255 MBps) [2024-12-03T10:50:44.133Z] Copying: 881/1024 [MB] (253 MBps) [2024-12-03T10:50:45.069Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:23:14.456 00:23:14.456 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76123 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:14.456 10:50:44 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:14.456 [2024-12-03 10:50:44.849084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
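What just happened: dirty_shutdown.sh killed the SPDK target (pid 76123) with SIGKILL to force the "dirty" shutdown the test is named for, removed its trace file, and then staged data with spdk_dd. The sizes line up: 262144 blocks x 4096 B/block = 1073741824 B = 1024 MiB, matching the final "Copying: 1024/1024 [MB]" above, and at the reported average of 223 MBps that pass took roughly 4.6 s. Restated with the paths shortened, and assuming spdk_dd's --seek counts output blocks the way dd's does:

spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144   # stage 1 GiB of random data
spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=ftl.json     # replay it into the ftl0 bdev, skipping the first 262144 output blocks

The startup that follows has to recover the device state left behind by the SIGKILL, which is why the log below shows "Performing recovery on blobstore" and, later, "Set FTL dirty state".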
00:23:14.456 [2024-12-03 10:50:44.849191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77016 ] 00:23:14.456 [2024-12-03 10:50:44.994649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.714 [2024-12-03 10:50:45.157884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.971 [2024-12-03 10:50:45.385730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.971 [2024-12-03 10:50:45.385791] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.971 [2024-12-03 10:50:45.446568] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:14.971 [2024-12-03 10:50:45.447318] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:14.971 [2024-12-03 10:50:45.447754] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:15.231 [2024-12-03 10:50:45.780770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.780801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:15.231 [2024-12-03 10:50:45.780812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:15.231 [2024-12-03 10:50:45.780818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.780854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.780862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.231 [2024-12-03 10:50:45.780870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:15.231 [2024-12-03 10:50:45.780876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.780889] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:15.231 [2024-12-03 10:50:45.781451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:15.231 [2024-12-03 10:50:45.781470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.781476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.231 [2024-12-03 10:50:45.781483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:23:15.231 [2024-12-03 10:50:45.781488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.782744] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:15.231 [2024-12-03 10:50:45.793305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.793333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:15.231 [2024-12-03 10:50:45.793342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.563 ms 00:23:15.231 [2024-12-03 10:50:45.793348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.793392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.793401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:15.231 
[2024-12-03 10:50:45.793407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:15.231 [2024-12-03 10:50:45.793413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.799669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.799693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.231 [2024-12-03 10:50:45.799701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.219 ms 00:23:15.231 [2024-12-03 10:50:45.799706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.799774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.799781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.231 [2024-12-03 10:50:45.799788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:15.231 [2024-12-03 10:50:45.799794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.799830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.799838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:15.231 [2024-12-03 10:50:45.799845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:15.231 [2024-12-03 10:50:45.799851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.799872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.231 [2024-12-03 10:50:45.802940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.803114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.231 [2024-12-03 10:50:45.803128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:23:15.231 [2024-12-03 10:50:45.803134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.803165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.231 [2024-12-03 10:50:45.803171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:15.231 [2024-12-03 10:50:45.803177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:15.231 [2024-12-03 10:50:45.803183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.231 [2024-12-03 10:50:45.803199] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:15.231 [2024-12-03 10:50:45.803216] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:15.231 [2024-12-03 10:50:45.803244] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:15.231 [2024-12-03 10:50:45.803258] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:15.231 [2024-12-03 10:50:45.803333] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:15.231 [2024-12-03 10:50:45.803342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:15.231 [2024-12-03 10:50:45.803349] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:15.231 [2024-12-03 10:50:45.803357] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:15.231 [2024-12-03 10:50:45.803365] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803371] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:15.232 [2024-12-03 10:50:45.803376] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:15.232 [2024-12-03 10:50:45.803382] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:15.232 [2024-12-03 10:50:45.803388] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:15.232 [2024-12-03 10:50:45.803396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.232 [2024-12-03 10:50:45.803402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:15.232 [2024-12-03 10:50:45.803408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:23:15.232 [2024-12-03 10:50:45.803415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.232 [2024-12-03 10:50:45.803461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.232 [2024-12-03 10:50:45.803468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:15.232 [2024-12-03 10:50:45.803473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:15.232 [2024-12-03 10:50:45.803480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.232 [2024-12-03 10:50:45.803534] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:15.232 [2024-12-03 10:50:45.803542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:15.232 [2024-12-03 10:50:45.803551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:15.232 [2024-12-03 10:50:45.803568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803580] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:15.232 [2024-12-03 10:50:45.803587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.232 [2024-12-03 10:50:45.803601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:15.232 [2024-12-03 10:50:45.803606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:15.232 [2024-12-03 10:50:45.803616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.232 [2024-12-03 10:50:45.803621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:15.232 [2024-12-03 10:50:45.803626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:15.232 [2024-12-03 10:50:45.803631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:23:15.232 [2024-12-03 10:50:45.803637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:15.232 [2024-12-03 10:50:45.803642] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:15.232 [2024-12-03 10:50:45.803647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:15.232 [2024-12-03 10:50:45.803657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:15.232 [2024-12-03 10:50:45.803662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:15.232 [2024-12-03 10:50:45.803672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:15.232 [2024-12-03 10:50:45.803687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:15.232 [2024-12-03 10:50:45.803703] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:15.232 [2024-12-03 10:50:45.803718] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803728] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:15.232 [2024-12-03 10:50:45.803732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.232 [2024-12-03 10:50:45.803742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:15.232 [2024-12-03 10:50:45.803748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:15.232 [2024-12-03 10:50:45.803752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.232 [2024-12-03 10:50:45.803757] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:15.232 [2024-12-03 10:50:45.803763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:15.232 [2024-12-03 10:50:45.803770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.232 [2024-12-03 10:50:45.803782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:15.232 [2024-12-03 10:50:45.803788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:15.232 [2024-12-03 10:50:45.803793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:15.232 [2024-12-03 10:50:45.803799] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:23:15.232 [2024-12-03 10:50:45.803804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:15.232 [2024-12-03 10:50:45.803809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:15.232 [2024-12-03 10:50:45.803815] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:15.232 [2024-12-03 10:50:45.803822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.232 [2024-12-03 10:50:45.803829] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:15.232 [2024-12-03 10:50:45.803834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:15.232 [2024-12-03 10:50:45.803839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:15.232 [2024-12-03 10:50:45.803845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:15.232 [2024-12-03 10:50:45.803851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:15.232 [2024-12-03 10:50:45.803856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:15.232 [2024-12-03 10:50:45.803862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:15.232 [2024-12-03 10:50:45.803867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:15.232 [2024-12-03 10:50:45.803872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:15.232 [2024-12-03 10:50:45.803877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:15.232 [2024-12-03 10:50:45.803883] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:15.232 [2024-12-03 10:50:45.803888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:15.233 [2024-12-03 10:50:45.803895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:15.233 [2024-12-03 10:50:45.803900] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:15.233 [2024-12-03 10:50:45.803907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.233 [2024-12-03 10:50:45.803914] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:15.233 [2024-12-03 10:50:45.803920] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:15.233 
[2024-12-03 10:50:45.803925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:15.233 [2024-12-03 10:50:45.803930] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:15.233 [2024-12-03 10:50:45.803936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.233 [2024-12-03 10:50:45.803943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:15.233 [2024-12-03 10:50:45.803949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:23:15.233 [2024-12-03 10:50:45.803957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.233 [2024-12-03 10:50:45.818196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.233 [2024-12-03 10:50:45.818305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.233 [2024-12-03 10:50:45.818355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.202 ms 00:23:15.233 [2024-12-03 10:50:45.818377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.233 [2024-12-03 10:50:45.818727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.233 [2024-12-03 10:50:45.818800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:15.233 [2024-12-03 10:50:45.818869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:15.233 [2024-12-03 10:50:45.818888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.861645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.861741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.494 [2024-12-03 10:50:45.861790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.681 ms 00:23:15.494 [2024-12-03 10:50:45.861809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.861894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.861916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.494 [2024-12-03 10:50:45.861933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:15.494 [2024-12-03 10:50:45.861952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.862392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.862429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.494 [2024-12-03 10:50:45.862447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:23:15.494 [2024-12-03 10:50:45.862465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.862573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.862594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.494 [2024-12-03 10:50:45.862610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:15.494 [2024-12-03 10:50:45.862625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.875182] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.875265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.494 [2024-12-03 10:50:45.875315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.532 ms 00:23:15.494 [2024-12-03 10:50:45.875348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.886427] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:15.494 [2024-12-03 10:50:45.886522] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:15.494 [2024-12-03 10:50:45.886565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.886582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:15.494 [2024-12-03 10:50:45.886597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.130 ms 00:23:15.494 [2024-12-03 10:50:45.886611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.905670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.905755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:15.494 [2024-12-03 10:50:45.905799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.025 ms 00:23:15.494 [2024-12-03 10:50:45.905816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.915402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.915487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:15.494 [2024-12-03 10:50:45.915526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.512 ms 00:23:15.494 [2024-12-03 10:50:45.915550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.924942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.925020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:15.494 [2024-12-03 10:50:45.925063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.361 ms 00:23:15.494 [2024-12-03 10:50:45.925080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.925356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.925465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:15.494 [2024-12-03 10:50:45.925510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:15.494 [2024-12-03 10:50:45.925526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.974308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.974411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:15.494 [2024-12-03 10:50:45.974505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.755 ms 00:23:15.494 [2024-12-03 10:50:45.974530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.982815] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum 
resident size is: 9 (of 10) MiB 00:23:15.494 [2024-12-03 10:50:45.984808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.984884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:15.494 [2024-12-03 10:50:45.984924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.236 ms 00:23:15.494 [2024-12-03 10:50:45.984943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.985002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.985021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:15.494 [2024-12-03 10:50:45.985037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:15.494 [2024-12-03 10:50:45.985052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.985134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.985155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:15.494 [2024-12-03 10:50:45.985209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:15.494 [2024-12-03 10:50:45.985227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.986264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.986339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:15.494 [2024-12-03 10:50:45.986380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:23:15.494 [2024-12-03 10:50:45.986402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.986434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.494 [2024-12-03 10:50:45.986451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:15.494 [2024-12-03 10:50:45.986470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:15.494 [2024-12-03 10:50:45.986484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.494 [2024-12-03 10:50:45.986521] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:15.495 [2024-12-03 10:50:45.986539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.495 [2024-12-03 10:50:45.986554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:15.495 [2024-12-03 10:50:45.986590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:15.495 [2024-12-03 10:50:45.986608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.495 [2024-12-03 10:50:46.005709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.495 [2024-12-03 10:50:46.005798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:15.495 [2024-12-03 10:50:46.005836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.077 ms 00:23:15.495 [2024-12-03 10:50:46.005854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.495 [2024-12-03 10:50:46.006133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.495 [2024-12-03 10:50:46.006229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:23:15.495 [2024-12-03 10:50:46.006337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:15.495 [2024-12-03 10:50:46.006357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.495 [2024-12-03 10:50:46.007273] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.112 ms, result 0 00:23:16.439  [2024-12-03T10:50:48.433Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-03T10:50:49.375Z] Copying: 36/1024 [MB] (20 MBps) [2024-12-03T10:50:50.319Z] Copying: 51/1024 [MB] (15 MBps) [2024-12-03T10:50:51.262Z] Copying: 64/1024 [MB] (12 MBps) [2024-12-03T10:50:52.210Z] Copying: 106/1024 [MB] (42 MBps) [2024-12-03T10:50:53.156Z] Copying: 137/1024 [MB] (30 MBps) [2024-12-03T10:50:54.099Z] Copying: 147/1024 [MB] (10 MBps) [2024-12-03T10:50:55.041Z] Copying: 158/1024 [MB] (10 MBps) [2024-12-03T10:50:56.428Z] Copying: 179/1024 [MB] (20 MBps) [2024-12-03T10:50:57.364Z] Copying: 198/1024 [MB] (19 MBps) [2024-12-03T10:50:58.359Z] Copying: 216/1024 [MB] (18 MBps) [2024-12-03T10:50:59.322Z] Copying: 242/1024 [MB] (26 MBps) [2024-12-03T10:51:00.280Z] Copying: 256/1024 [MB] (13 MBps) [2024-12-03T10:51:01.221Z] Copying: 275/1024 [MB] (19 MBps) [2024-12-03T10:51:02.164Z] Copying: 296/1024 [MB] (20 MBps) [2024-12-03T10:51:03.109Z] Copying: 317/1024 [MB] (20 MBps) [2024-12-03T10:51:04.054Z] Copying: 327/1024 [MB] (10 MBps) [2024-12-03T10:51:05.438Z] Copying: 337/1024 [MB] (10 MBps) [2024-12-03T10:51:06.377Z] Copying: 348/1024 [MB] (10 MBps) [2024-12-03T10:51:07.319Z] Copying: 359/1024 [MB] (10 MBps) [2024-12-03T10:51:08.259Z] Copying: 369/1024 [MB] (10 MBps) [2024-12-03T10:51:09.200Z] Copying: 380/1024 [MB] (10 MBps) [2024-12-03T10:51:10.140Z] Copying: 391/1024 [MB] (10 MBps) [2024-12-03T10:51:11.081Z] Copying: 401/1024 [MB] (10 MBps) [2024-12-03T10:51:12.023Z] Copying: 411/1024 [MB] (10 MBps) [2024-12-03T10:51:13.409Z] Copying: 422/1024 [MB] (10 MBps) [2024-12-03T10:51:14.352Z] Copying: 432/1024 [MB] (10 MBps) [2024-12-03T10:51:15.294Z] Copying: 443/1024 [MB] (10 MBps) [2024-12-03T10:51:16.235Z] Copying: 453/1024 [MB] (10 MBps) [2024-12-03T10:51:17.192Z] Copying: 463/1024 [MB] (10 MBps) [2024-12-03T10:51:18.133Z] Copying: 474/1024 [MB] (10 MBps) [2024-12-03T10:51:19.075Z] Copying: 484/1024 [MB] (10 MBps) [2024-12-03T10:51:20.459Z] Copying: 495/1024 [MB] (10 MBps) [2024-12-03T10:51:21.032Z] Copying: 506/1024 [MB] (11 MBps) [2024-12-03T10:51:22.414Z] Copying: 517/1024 [MB] (10 MBps) [2024-12-03T10:51:23.355Z] Copying: 533/1024 [MB] (15 MBps) [2024-12-03T10:51:24.294Z] Copying: 557/1024 [MB] (24 MBps) [2024-12-03T10:51:25.235Z] Copying: 569/1024 [MB] (12 MBps) [2024-12-03T10:51:26.174Z] Copying: 582/1024 [MB] (12 MBps) [2024-12-03T10:51:27.135Z] Copying: 596/1024 [MB] (14 MBps) [2024-12-03T10:51:28.143Z] Copying: 609/1024 [MB] (13 MBps) [2024-12-03T10:51:29.084Z] Copying: 624/1024 [MB] (14 MBps) [2024-12-03T10:51:30.026Z] Copying: 637/1024 [MB] (12 MBps) [2024-12-03T10:51:31.411Z] Copying: 652/1024 [MB] (14 MBps) [2024-12-03T10:51:32.354Z] Copying: 669/1024 [MB] (17 MBps) [2024-12-03T10:51:33.299Z] Copying: 684/1024 [MB] (15 MBps) [2024-12-03T10:51:34.243Z] Copying: 701/1024 [MB] (16 MBps) [2024-12-03T10:51:35.185Z] Copying: 717/1024 [MB] (15 MBps) [2024-12-03T10:51:36.127Z] Copying: 731/1024 [MB] (14 MBps) [2024-12-03T10:51:37.070Z] Copying: 747/1024 [MB] (15 MBps) [2024-12-03T10:51:38.456Z] Copying: 763/1024 [MB] (16 MBps) [2024-12-03T10:51:39.028Z] Copying: 777/1024 [MB] (14 MBps) 
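The startup trace above is a sequence of trace_step records -- an Action line, its name, its duration, and its status -- closed out by the summary "Management process finished, name 'FTL startup', duration = 226.112 ms". The slow steps stand out: "Restore P2L checkpoints" at 48.755 ms and "Initialize NV cache" at 42.681 ms. A hedged one-liner for pulling per-step durations out of such a trace, assuming the one-record-per-line form SPDK emits (the console capture above wraps several records per line) and a placeholder file name ftl.log:

awk '/407:trace_step/ { sub(/.*name: /, ""); name = $0 }   # remember the step name
     /409:trace_step/ { sub(/.*duration: /, "")            # ...then print it with its duration
                        print $0 "\t" name }' ftl.log | sort -nr | head

The sort lists steps slowest-first, so a run like this one surfaces the P2L-checkpoint and NV-cache restores at the top.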
[2024-12-03T10:51:40.414Z] Copying: 795/1024 [MB] (18 MBps) [2024-12-03T10:51:41.356Z] Copying: 814/1024 [MB] (19 MBps) [2024-12-03T10:51:42.299Z] Copying: 826/1024 [MB] (11 MBps) [2024-12-03T10:51:43.250Z] Copying: 844/1024 [MB] (18 MBps) [2024-12-03T10:51:44.186Z] Copying: 861/1024 [MB] (17 MBps) [2024-12-03T10:51:45.123Z] Copying: 878/1024 [MB] (16 MBps) [2024-12-03T10:51:46.060Z] Copying: 894/1024 [MB] (16 MBps) [2024-12-03T10:51:47.440Z] Copying: 911/1024 [MB] (16 MBps) [2024-12-03T10:51:48.383Z] Copying: 929/1024 [MB] (18 MBps) [2024-12-03T10:51:49.325Z] Copying: 961528/1048576 [kB] (10140 kBps) [2024-12-03T10:51:50.266Z] Copying: 971656/1048576 [kB] (10128 kBps) [2024-12-03T10:51:51.209Z] Copying: 959/1024 [MB] (10 MBps) [2024-12-03T10:51:52.149Z] Copying: 969/1024 [MB] (10 MBps) [2024-12-03T10:51:53.088Z] Copying: 980/1024 [MB] (10 MBps) [2024-12-03T10:51:54.029Z] Copying: 992/1024 [MB] (11 MBps) [2024-12-03T10:51:55.413Z] Copying: 1003/1024 [MB] (11 MBps) [2024-12-03T10:51:56.376Z] Copying: 1014/1024 [MB] (11 MBps) [2024-12-03T10:51:56.956Z] Copying: 1047840/1048576 [kB] (8808 kBps) [2024-12-03T10:51:56.956Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-03 10:51:56.721587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.343 [2024-12-03 10:51:56.721648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:26.343 [2024-12-03 10:51:56.721663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:26.343 [2024-12-03 10:51:56.721672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.343 [2024-12-03 10:51:56.725815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.343 [2024-12-03 10:51:56.729119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.343 [2024-12-03 10:51:56.729155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:26.343 [2024-12-03 10:51:56.729175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:24:26.344 [2024-12-03 10:51:56.729185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.344 [2024-12-03 10:51:56.741823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.344 [2024-12-03 10:51:56.741866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:26.344 [2024-12-03 10:51:56.741878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.077 ms 00:24:26.344 [2024-12-03 10:51:56.741886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.344 [2024-12-03 10:51:56.763109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.344 [2024-12-03 10:51:56.763147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:26.344 [2024-12-03 10:51:56.763158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.206 ms 00:24:26.344 [2024-12-03 10:51:56.763166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.344 [2024-12-03 10:51:56.769324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.344 [2024-12-03 10:51:56.769357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:26.344 [2024-12-03 10:51:56.769367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.118 ms 00:24:26.344 [2024-12-03 10:51:56.769375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:24:26.344 [2024-12-03 10:51:56.795755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.344 [2024-12-03 10:51:56.795800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:26.344 [2024-12-03 10:51:56.795813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.335 ms 00:24:26.344 [2024-12-03 10:51:56.795820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.344 [2024-12-03 10:51:56.812237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.344 [2024-12-03 10:51:56.812281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:26.344 [2024-12-03 10:51:56.812295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.374 ms 00:24:26.344 [2024-12-03 10:51:56.812303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.606 [2024-12-03 10:51:57.085809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.606 [2024-12-03 10:51:57.085871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:26.606 [2024-12-03 10:51:57.085884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 273.454 ms 00:24:26.607 [2024-12-03 10:51:57.085893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-03 10:51:57.112229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.607 [2024-12-03 10:51:57.112273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:26.607 [2024-12-03 10:51:57.112285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.315 ms 00:24:26.607 [2024-12-03 10:51:57.112293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-03 10:51:57.137912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.607 [2024-12-03 10:51:57.137957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:26.607 [2024-12-03 10:51:57.137969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:24:26.607 [2024-12-03 10:51:57.137976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-03 10:51:57.163271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.607 [2024-12-03 10:51:57.163465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:26.607 [2024-12-03 10:51:57.163488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.252 ms 00:24:26.607 [2024-12-03 10:51:57.163495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-03 10:51:57.188778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.607 [2024-12-03 10:51:57.188830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:26.607 [2024-12-03 10:51:57.188845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.915 ms 00:24:26.607 [2024-12-03 10:51:57.188852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.607 [2024-12-03 10:51:57.188896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:26.607 [2024-12-03 10:51:57.188911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97792 / 261120 wr_cnt: 1 state: open 00:24:26.607 [2024-12-03 10:51:57.188922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 
0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.188994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189338] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:26.607 [2024-12-03 10:51:57.189346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 
10:51:57.189543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:26.608 [2024-12-03 10:51:57.189749] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:26.608 [2024-12-03 10:51:57.189761] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f590d54a-bc3c-45d1-a063-a9731ee34255 00:24:26.608 [2024-12-03 10:51:57.189770] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97792 00:24:26.608 [2024-12-03 10:51:57.189777] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 98752 00:24:26.608 [2024-12-03 10:51:57.189785] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97792 00:24:26.608 [2024-12-03 10:51:57.189800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0098 00:24:26.608 [2024-12-03 10:51:57.189808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:26.608 [2024-12-03 10:51:57.189816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:26.608 [2024-12-03 10:51:57.189824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:26.608 [2024-12-03 10:51:57.189831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:26.608 [2024-12-03 10:51:57.189841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:26.608 [2024-12-03 10:51:57.189849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.608 [2024-12-03 10:51:57.189858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:26.608 [2024-12-03 10:51:57.189866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:24:26.608 [2024-12-03 10:51:57.189874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.608 [2024-12-03 10:51:57.203711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.608 [2024-12-03 10:51:57.203750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:26.608 [2024-12-03 10:51:57.203762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.797 ms 00:24:26.608 [2024-12-03 10:51:57.203771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.608 [2024-12-03 10:51:57.203994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.608 [2024-12-03 10:51:57.204004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:26.608 [2024-12-03 10:51:57.204019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:24:26.608 [2024-12-03 10:51:57.204027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.242517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.242709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.871 [2024-12-03 10:51:57.242729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.242738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.242801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.242811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.871 [2024-12-03 10:51:57.242826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.242834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.242906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.242915] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.871 [2024-12-03 10:51:57.242924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.242931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.242948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.242956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.871 [2024-12-03 10:51:57.242963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.242975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.323997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.324047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.871 [2024-12-03 10:51:57.324087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.324096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.356264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.871 [2024-12-03 10:51:57.356281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.356289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.356361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.871 [2024-12-03 10:51:57.356370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.356378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.356430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.871 [2024-12-03 10:51:57.356439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.356447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.356562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.871 [2024-12-03 10:51:57.356571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.356579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.356621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:26.871 [2024-12-03 10:51:57.356629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.356637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356681] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.356691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.871 [2024-12-03 10:51:57.356699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.356708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.871 [2024-12-03 10:51:57.356770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.871 [2024-12-03 10:51:57.356779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.871 [2024-12-03 10:51:57.356788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.871 [2024-12-03 10:51:57.356921] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 638.253 ms, result 0 00:24:28.257 00:24:28.257 00:24:28.518 10:51:58 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:30.431 10:52:00 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:30.431 [2024-12-03 10:52:00.949681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:30.431 [2024-12-03 10:52:00.949790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77795 ] 00:24:30.692 [2024-12-03 10:52:01.098895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.953 [2024-12-03 10:52:01.313674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.216 [2024-12-03 10:52:01.572148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:31.216 [2024-12-03 10:52:01.572472] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:31.216 [2024-12-03 10:52:01.728000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.728082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:31.216 [2024-12-03 10:52:01.728098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:31.216 [2024-12-03 10:52:01.728110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.728164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.728175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.216 [2024-12-03 10:52:01.728184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:31.216 [2024-12-03 10:52:01.728192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.728212] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:31.216 [2024-12-03 10:52:01.729300] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:31.216 [2024-12-03 10:52:01.729351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
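The statistics dumped during the 'FTL shutdown' pass above let the reported write-amplification factor be checked by hand: WAF = total writes / user writes = 98752 / 97792 ≈ 1.0098, exactly the value in the dump. A minimal sketch of that check, assuming only the record format shown above (the parsing pattern and variable names are illustrative, not part of SPDK):

# Verify the WAF in an ftl_dev_dump_stats dump against its own counters.
# The snippet below hard-codes the three records seen in the log above.
import re

dump = """
[FTL][ftl0] total writes: 98752
[FTL][ftl0] user writes: 97792
[FTL][ftl0] WAF: 1.0098
"""

total = int(re.search(r"total writes:\s+(\d+)", dump).group(1))
user = int(re.search(r"user writes:\s+(\d+)", dump).group(1))
reported = float(re.search(r"WAF:\s+([\d.]+)", dump).group(1))

waf = total / user                 # 98752 / 97792 = 1.00982...
assert abs(waf - reported) < 5e-5  # agrees with the dumped 1.0098
print(f"computed WAF = {waf:.4f}, reported = {reported}")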
00:24:31.216 [2024-12-03 10:52:01.729362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.216 [2024-12-03 10:52:01.729372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:24:31.216 [2024-12-03 10:52:01.729380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.731091] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:31.216 [2024-12-03 10:52:01.745374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.745424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:31.216 [2024-12-03 10:52:01.745438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.285 ms 00:24:31.216 [2024-12-03 10:52:01.745446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.745521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.745530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:31.216 [2024-12-03 10:52:01.745540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:31.216 [2024-12-03 10:52:01.745547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.753551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.753591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.216 [2024-12-03 10:52:01.753601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.926 ms 00:24:31.216 [2024-12-03 10:52:01.753609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.753703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.753713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.216 [2024-12-03 10:52:01.753722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:31.216 [2024-12-03 10:52:01.753729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.753776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.753786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:31.216 [2024-12-03 10:52:01.753794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:31.216 [2024-12-03 10:52:01.753801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.753832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:31.216 [2024-12-03 10:52:01.758125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.758160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:31.216 [2024-12-03 10:52:01.758172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:24:31.216 [2024-12-03 10:52:01.758180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.758218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.758226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 
00:24:31.216 [2024-12-03 10:52:01.758235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:31.216 [2024-12-03 10:52:01.758246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.758294] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:31.216 [2024-12-03 10:52:01.758317] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:31.216 [2024-12-03 10:52:01.758353] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:31.216 [2024-12-03 10:52:01.758370] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:31.216 [2024-12-03 10:52:01.758446] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:31.216 [2024-12-03 10:52:01.758456] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:31.216 [2024-12-03 10:52:01.758469] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:31.216 [2024-12-03 10:52:01.758480] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:31.216 [2024-12-03 10:52:01.758489] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:31.216 [2024-12-03 10:52:01.758498] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:31.216 [2024-12-03 10:52:01.758505] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:31.216 [2024-12-03 10:52:01.758512] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:31.216 [2024-12-03 10:52:01.758519] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:31.216 [2024-12-03 10:52:01.758528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.758536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:31.216 [2024-12-03 10:52:01.758543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:24:31.216 [2024-12-03 10:52:01.758551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.758613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.216 [2024-12-03 10:52:01.758621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:31.216 [2024-12-03 10:52:01.758628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:31.216 [2024-12-03 10:52:01.758635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.216 [2024-12-03 10:52:01.758706] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:31.216 [2024-12-03 10:52:01.758716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:31.216 [2024-12-03 10:52:01.758724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:31.216 [2024-12-03 10:52:01.758732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.216 [2024-12-03 10:52:01.758740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:31.216 [2024-12-03 10:52:01.758750] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:31.216 [2024-12-03 10:52:01.758758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:31.216 [2024-12-03 10:52:01.758766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:31.216 [2024-12-03 10:52:01.758773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:31.216 [2024-12-03 10:52:01.758780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:31.216 [2024-12-03 10:52:01.758787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:31.216 [2024-12-03 10:52:01.758795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:31.216 [2024-12-03 10:52:01.758802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:31.216 [2024-12-03 10:52:01.758809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:31.216 [2024-12-03 10:52:01.758815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:31.216 [2024-12-03 10:52:01.758822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.216 [2024-12-03 10:52:01.758836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:31.217 [2024-12-03 10:52:01.758843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:31.217 [2024-12-03 10:52:01.758850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.217 [2024-12-03 10:52:01.758856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:31.217 [2024-12-03 10:52:01.758863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:31.217 [2024-12-03 10:52:01.758870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:31.217 [2024-12-03 10:52:01.758877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:31.217 [2024-12-03 10:52:01.758883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:31.217 [2024-12-03 10:52:01.758890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.217 [2024-12-03 10:52:01.758897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:31.217 [2024-12-03 10:52:01.758903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:31.217 [2024-12-03 10:52:01.758910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.217 [2024-12-03 10:52:01.758917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:31.217 [2024-12-03 10:52:01.758923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:31.217 [2024-12-03 10:52:01.758929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.217 [2024-12-03 10:52:01.758936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:31.217 [2024-12-03 10:52:01.758942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:31.217 [2024-12-03 10:52:01.758949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:31.217 [2024-12-03 10:52:01.758955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:31.217 [2024-12-03 10:52:01.758962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:31.217 [2024-12-03 10:52:01.758969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:31.217 [2024-12-03 
10:52:01.758976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:31.217 [2024-12-03 10:52:01.758983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:31.217 [2024-12-03 10:52:01.758990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:31.217 [2024-12-03 10:52:01.758996] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:31.217 [2024-12-03 10:52:01.759006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:31.217 [2024-12-03 10:52:01.759014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:31.217 [2024-12-03 10:52:01.759023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:31.217 [2024-12-03 10:52:01.759031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:31.217 [2024-12-03 10:52:01.759038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:31.217 [2024-12-03 10:52:01.759045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:31.217 [2024-12-03 10:52:01.759074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:31.217 [2024-12-03 10:52:01.759082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:31.217 [2024-12-03 10:52:01.759089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:31.217 [2024-12-03 10:52:01.759097] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:31.217 [2024-12-03 10:52:01.759106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:31.217 [2024-12-03 10:52:01.759116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:31.217 [2024-12-03 10:52:01.759123] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:31.217 [2024-12-03 10:52:01.759131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:31.217 [2024-12-03 10:52:01.759139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:31.217 [2024-12-03 10:52:01.759146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:31.217 [2024-12-03 10:52:01.759154] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:31.217 [2024-12-03 10:52:01.759162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:31.217 [2024-12-03 10:52:01.759169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:31.217 [2024-12-03 10:52:01.759176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:31.217 [2024-12-03 10:52:01.759184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:31.217 [2024-12-03 10:52:01.759192] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:31.217 [2024-12-03 10:52:01.759201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:31.217 [2024-12-03 10:52:01.759209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:31.217 [2024-12-03 10:52:01.759229] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:31.217 [2024-12-03 10:52:01.759237] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:31.217 [2024-12-03 10:52:01.759246] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:31.217 [2024-12-03 10:52:01.759254] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:31.217 [2024-12-03 10:52:01.759262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:31.217 [2024-12-03 10:52:01.759269] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:31.217 [2024-12-03 10:52:01.759277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.217 [2024-12-03 10:52:01.759297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:31.217 [2024-12-03 10:52:01.759305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:24:31.217 [2024-12-03 10:52:01.759312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.217 [2024-12-03 10:52:01.777968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.217 [2024-12-03 10:52:01.778010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.217 [2024-12-03 10:52:01.778022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.613 ms 00:24:31.217 [2024-12-03 10:52:01.778037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.217 [2024-12-03 10:52:01.778148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.217 [2024-12-03 10:52:01.778158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:31.217 [2024-12-03 10:52:01.778168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:31.217 [2024-12-03 10:52:01.778177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.217 [2024-12-03 10:52:01.821973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.217 [2024-12-03 10:52:01.822021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.217 [2024-12-03 10:52:01.822035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.744 ms 00:24:31.217 [2024-12-03 10:52:01.822044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.217 [2024-12-03 10:52:01.822104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.217 [2024-12-03 10:52:01.822115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:24:31.217 [2024-12-03 10:52:01.822124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:31.217 [2024-12-03 10:52:01.822131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.217 [2024-12-03 10:52:01.822702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.217 [2024-12-03 10:52:01.822735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.217 [2024-12-03 10:52:01.822746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:24:31.217 [2024-12-03 10:52:01.822760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.217 [2024-12-03 10:52:01.822885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.217 [2024-12-03 10:52:01.822894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.217 [2024-12-03 10:52:01.822903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:31.217 [2024-12-03 10:52:01.822910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.839368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.839405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.478 [2024-12-03 10:52:01.839416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.432 ms 00:24:31.478 [2024-12-03 10:52:01.839424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.853595] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:31.478 [2024-12-03 10:52:01.853636] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:31.478 [2024-12-03 10:52:01.853648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.853656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:31.478 [2024-12-03 10:52:01.853665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.117 ms 00:24:31.478 [2024-12-03 10:52:01.853673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.879954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.879995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:31.478 [2024-12-03 10:52:01.880009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.226 ms 00:24:31.478 [2024-12-03 10:52:01.880017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.893194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.893231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:31.478 [2024-12-03 10:52:01.893243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.113 ms 00:24:31.478 [2024-12-03 10:52:01.893250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.905716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.905755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:31.478 [2024-12-03 10:52:01.905776] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 00:24:31.478 [2024-12-03 10:52:01.905783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.906194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.906215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:31.478 [2024-12-03 10:52:01.906226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:24:31.478 [2024-12-03 10:52:01.906234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.972343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.972395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:31.478 [2024-12-03 10:52:01.972410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.091 ms 00:24:31.478 [2024-12-03 10:52:01.972419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.983825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:31.478 [2024-12-03 10:52:01.986902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.986938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:31.478 [2024-12-03 10:52:01.986949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.427 ms 00:24:31.478 [2024-12-03 10:52:01.986963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.987032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.987043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:31.478 [2024-12-03 10:52:01.987068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:31.478 [2024-12-03 10:52:01.987077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.988488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.988526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:31.478 [2024-12-03 10:52:01.988536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:24:31.478 [2024-12-03 10:52:01.988544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.989869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.989905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:31.478 [2024-12-03 10:52:01.989915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.295 ms 00:24:31.478 [2024-12-03 10:52:01.989922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.989956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.989965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:31.478 [2024-12-03 10:52:01.989980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:31.478 [2024-12-03 10:52:01.989988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:01.990025] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:24:31.478 [2024-12-03 10:52:01.990036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:01.990048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:31.478 [2024-12-03 10:52:01.990082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:31.478 [2024-12-03 10:52:01.990089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:02.016004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:02.016044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:31.478 [2024-12-03 10:52:02.016066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.894 ms 00:24:31.478 [2024-12-03 10:52:02.016076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:02.016166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.478 [2024-12-03 10:52:02.016176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:31.478 [2024-12-03 10:52:02.016185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:31.478 [2024-12-03 10:52:02.016193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.478 [2024-12-03 10:52:02.024306] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.573 ms, result 0 00:24:32.860  [2024-12-03T10:52:04.415Z] Copying: 1228/1048576 [kB] (1228 kBps) [2024-12-03T10:52:05.355Z] Copying: 4860/1048576 [kB] (3632 kBps) [2024-12-03T10:52:06.298Z] Copying: 26/1024 [MB] (21 MBps) [2024-12-03T10:52:07.245Z] Copying: 59/1024 [MB] (32 MBps) [2024-12-03T10:52:08.630Z] Copying: 86/1024 [MB] (27 MBps) [2024-12-03T10:52:09.202Z] Copying: 120/1024 [MB] (34 MBps) [2024-12-03T10:52:10.590Z] Copying: 147/1024 [MB] (26 MBps) [2024-12-03T10:52:11.532Z] Copying: 173/1024 [MB] (26 MBps) [2024-12-03T10:52:12.477Z] Copying: 188/1024 [MB] (15 MBps) [2024-12-03T10:52:13.421Z] Copying: 219/1024 [MB] (31 MBps) [2024-12-03T10:52:14.366Z] Copying: 261/1024 [MB] (41 MBps) [2024-12-03T10:52:15.309Z] Copying: 291/1024 [MB] (30 MBps) [2024-12-03T10:52:16.250Z] Copying: 320/1024 [MB] (29 MBps) [2024-12-03T10:52:17.636Z] Copying: 349/1024 [MB] (28 MBps) [2024-12-03T10:52:18.208Z] Copying: 377/1024 [MB] (28 MBps) [2024-12-03T10:52:19.595Z] Copying: 394/1024 [MB] (16 MBps) [2024-12-03T10:52:20.539Z] Copying: 409/1024 [MB] (15 MBps) [2024-12-03T10:52:21.481Z] Copying: 434/1024 [MB] (24 MBps) [2024-12-03T10:52:22.425Z] Copying: 459/1024 [MB] (25 MBps) [2024-12-03T10:52:23.369Z] Copying: 477/1024 [MB] (18 MBps) [2024-12-03T10:52:24.349Z] Copying: 502/1024 [MB] (25 MBps) [2024-12-03T10:52:25.290Z] Copying: 529/1024 [MB] (26 MBps) [2024-12-03T10:52:26.231Z] Copying: 552/1024 [MB] (22 MBps) [2024-12-03T10:52:27.616Z] Copying: 570/1024 [MB] (18 MBps) [2024-12-03T10:52:28.560Z] Copying: 615/1024 [MB] (45 MBps) [2024-12-03T10:52:29.505Z] Copying: 638/1024 [MB] (22 MBps) [2024-12-03T10:52:30.449Z] Copying: 666/1024 [MB] (28 MBps) [2024-12-03T10:52:31.393Z] Copying: 686/1024 [MB] (20 MBps) [2024-12-03T10:52:32.339Z] Copying: 710/1024 [MB] (23 MBps) [2024-12-03T10:52:33.285Z] Copying: 738/1024 [MB] (28 MBps) [2024-12-03T10:52:34.229Z] Copying: 755/1024 [MB] (17 MBps) [2024-12-03T10:52:35.616Z] Copying: 772/1024 [MB] (16 MBps) [2024-12-03T10:52:36.560Z] Copying: 
794/1024 [MB] (22 MBps) [2024-12-03T10:52:37.506Z] Copying: 818/1024 [MB] (23 MBps) [2024-12-03T10:52:38.449Z] Copying: 836/1024 [MB] (18 MBps) [2024-12-03T10:52:39.393Z] Copying: 865/1024 [MB] (28 MBps) [2024-12-03T10:52:40.337Z] Copying: 885/1024 [MB] (20 MBps) [2024-12-03T10:52:41.281Z] Copying: 911/1024 [MB] (25 MBps) [2024-12-03T10:52:42.222Z] Copying: 931/1024 [MB] (20 MBps) [2024-12-03T10:52:43.608Z] Copying: 957/1024 [MB] (25 MBps) [2024-12-03T10:52:44.550Z] Copying: 978/1024 [MB] (21 MBps) [2024-12-03T10:52:45.490Z] Copying: 994/1024 [MB] (15 MBps) [2024-12-03T10:52:45.490Z] Copying: 1020/1024 [MB] (25 MBps) [2024-12-03T10:52:46.878Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-03 10:52:46.781160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.781270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:16.265 [2024-12-03 10:52:46.781444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:16.265 [2024-12-03 10:52:46.781463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.265 [2024-12-03 10:52:46.781508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:16.265 [2024-12-03 10:52:46.787404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.787454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:16.265 [2024-12-03 10:52:46.787466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.867 ms 00:25:16.265 [2024-12-03 10:52:46.787475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.265 [2024-12-03 10:52:46.787758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.787781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:16.265 [2024-12-03 10:52:46.787792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:25:16.265 [2024-12-03 10:52:46.787801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.265 [2024-12-03 10:52:46.803453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.803506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:16.265 [2024-12-03 10:52:46.803519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.633 ms 00:25:16.265 [2024-12-03 10:52:46.803529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.265 [2024-12-03 10:52:46.809755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.809809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:16.265 [2024-12-03 10:52:46.809820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.187 ms 00:25:16.265 [2024-12-03 10:52:46.809830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.265 [2024-12-03 10:52:46.836925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.836977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:16.265 [2024-12-03 10:52:46.836990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.038 ms 00:25:16.265 [2024-12-03 10:52:46.836998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.265 [2024-12-03 
10:52:46.853101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.853151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:16.265 [2024-12-03 10:52:46.853164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.055 ms 00:25:16.265 [2024-12-03 10:52:46.853173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.265 [2024-12-03 10:52:46.863749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.265 [2024-12-03 10:52:46.863798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:16.265 [2024-12-03 10:52:46.863810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.521 ms 00:25:16.265 [2024-12-03 10:52:46.863826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.529 [2024-12-03 10:52:46.890436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.529 [2024-12-03 10:52:46.890485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:16.529 [2024-12-03 10:52:46.890497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.586 ms 00:25:16.529 [2024-12-03 10:52:46.890505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.529 [2024-12-03 10:52:46.916515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.529 [2024-12-03 10:52:46.916564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:16.529 [2024-12-03 10:52:46.916577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.963 ms 00:25:16.529 [2024-12-03 10:52:46.916596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.529 [2024-12-03 10:52:46.941789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.529 [2024-12-03 10:52:46.941839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:16.529 [2024-12-03 10:52:46.941852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.143 ms 00:25:16.529 [2024-12-03 10:52:46.941860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.529 [2024-12-03 10:52:46.966855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.529 [2024-12-03 10:52:46.966903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:16.529 [2024-12-03 10:52:46.966915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.907 ms 00:25:16.529 [2024-12-03 10:52:46.966922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.529 [2024-12-03 10:52:46.966967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:16.529 [2024-12-03 10:52:46.966984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:16.529 [2024-12-03 10:52:46.966996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:25:16.529 [2024-12-03 10:52:46.967004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 
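The SB metadata layout records earlier in this run give each region as hex blk_offs/blk_sz pairs, while the layout dump reports the same regions in MiB; comparing the two implies a 4 KiB FTL block here (the l2p region is 0x5000 = 20480 blocks = 80.00 MiB). A hedged sketch of that conversion, with the block size derived from the dump rather than queried from SPDK:

# Convert blk_sz values from the SB metadata layout dump into MiB,
# assuming the 4 KiB block implied by l2p (0x5000 blocks = 80.00 MiB).
FTL_BLOCK_SIZE = 4096  # bytes; inferred from the dump, not an SPDK API value

regions = {            # type: (blk_offs, blk_sz), copied from the records above
    0x2: (0x20, 0x5000),       # l2p
    0x8: (0x61e0, 0x100000),   # NV cache data
    0x9: (0x40, 0x1900000),    # base device data (data_btm)
}

for rtype, (offs, size) in regions.items():
    mib = size * FTL_BLOCK_SIZE / (1 << 20)
    print(f"region 0x{rtype:x}: offset blk 0x{offs:x}, {mib:.2f} MiB")
# -> 80.00, 4096.00 and 102400.00 MiB, matching the MiB figures in the layout dump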
[2024-12-03 10:52:46.967030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:16.529 [2024-12-03 10:52:46.967236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: 
free 00:25:16.529 [2024-12-03 10:52:46.967246 .. 10:52:46.967831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 31-100: 0 / 261120 wr_cnt: 0 state: free (one identical record per band) 00:25:16.530 [2024-12-03 10:52:46.967848] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:16.530 [2024-12-03 10:52:46.967856] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f590d54a-bc3c-45d1-a063-a9731ee34255 00:25:16.530 [2024-12-03 10:52:46.967865] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:25:16.530 [2024-12-03 10:52:46.967880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 169152 00:25:16.530 [2024-12-03 10:52:46.967887] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 167168 00:25:16.530 
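
The WAF value in the next record follows directly from the two write counters just dumped. A quick sanity check (a sketch; 169152 and 167168 are copied from this run's ftl_dev_dump_stats output):

```bash
# WAF = total writes / user writes, per the counters dumped above.
awk 'BEGIN { printf "WAF = %.4f\n", 169152 / 167168 }'   # -> WAF = 1.0119
```
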
[2024-12-03 10:52:46.967897] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119 00:25:16.530 [2024-12-03 10:52:46.967904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:16.530 [2024-12-03 10:52:46.967912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:16.530 [2024-12-03 10:52:46.967921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:16.530 [2024-12-03 10:52:46.967928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:16.530 [2024-12-03 10:52:46.967941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:16.530 [2024-12-03 10:52:46.967949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.530 [2024-12-03 10:52:46.967958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:16.530 [2024-12-03 10:52:46.967967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:25:16.530 [2024-12-03 10:52:46.967974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.530 [2024-12-03 10:52:46.981574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.530 [2024-12-03 10:52:46.981615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:16.530 [2024-12-03 10:52:46.981627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.564 ms 00:25:16.530 [2024-12-03 10:52:46.981635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.530 [2024-12-03 10:52:46.981865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.530 [2024-12-03 10:52:46.981875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:16.530 [2024-12-03 10:52:46.981884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:25:16.530 [2024-12-03 10:52:46.981898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.530 [2024-12-03 10:52:47.021250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.530 [2024-12-03 10:52:47.021300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.531 [2024-12-03 10:52:47.021311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.021319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.021378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.021386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.531 [2024-12-03 10:52:47.021395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.021402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.021479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.021493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.531 [2024-12-03 10:52:47.021501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.021509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.021525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.021533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:25:16.531 [2024-12-03 10:52:47.021542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.021550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.101968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.102029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.531 [2024-12-03 10:52:47.102043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.102051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.134184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.531 [2024-12-03 10:52:47.134196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.134204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.134286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.531 [2024-12-03 10:52:47.134295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.134303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.134356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.531 [2024-12-03 10:52:47.134367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.134374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.134488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.531 [2024-12-03 10:52:47.134496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.134504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.134550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:16.531 [2024-12-03 10:52:47.134559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.134567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 [2024-12-03 10:52:47.134623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.531 [2024-12-03 10:52:47.134632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.134640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.531 
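
Every shutdown step above follows the same four-record trace_step pattern (Action/Rollback header, name, duration, status). A minimal sketch for summarizing such records, assuming one record per line as in the raw console output and a hypothetical capture file named build.log:

```bash
# Pair each "name:" record with the "duration:" record that follows it.
grep 'trace_step' build.log \
  | sed -n -e 's/.*name: /name: /p' -e 's/.*duration: /duration: /p' \
  | paste -d ' ' - -
# e.g. "name: Initialize trim map duration: 0.000 ms"
```
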
[2024-12-03 10:52:47.134699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.531 [2024-12-03 10:52:47.134707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.531 [2024-12-03 10:52:47.134715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.531 [2024-12-03 10:52:47.134851] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.678 ms, result 0 00:25:17.502 00:25:17.502 00:25:17.502 10:52:48 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:19.418 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:19.418 10:52:49 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:19.418 [2024-12-03 10:52:50.019803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:19.418 [2024-12-03 10:52:50.019892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78301 ] 00:25:19.678 [2024-12-03 10:52:50.161283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.939 [2024-12-03 10:52:50.360699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.202 [2024-12-03 10:52:50.648092] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:20.202 [2024-12-03 10:52:50.648173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:20.202 [2024-12-03 10:52:50.805129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.202 [2024-12-03 10:52:50.805193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:20.202 [2024-12-03 10:52:50.805209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:20.202 [2024-12-03 10:52:50.805220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.202 [2024-12-03 10:52:50.805274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.202 [2024-12-03 10:52:50.805285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:20.202 [2024-12-03 10:52:50.805294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:20.202 [2024-12-03 10:52:50.805302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.202 [2024-12-03 10:52:50.805323] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:20.202 [2024-12-03 10:52:50.806104] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:20.202 [2024-12-03 10:52:50.806133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.202 [2024-12-03 10:52:50.806143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:20.202 [2024-12-03 10:52:50.806152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:25:20.202 [2024-12-03 10:52:50.806160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.202 [2024-12-03 10:52:50.807914] mngt/ftl_mngt_md.c: 
452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:20.464 [2024-12-03 10:52:50.822355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.464 [2024-12-03 10:52:50.822408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:20.464 [2024-12-03 10:52:50.822422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.443 ms 00:25:20.464 [2024-12-03 10:52:50.822430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.464 [2024-12-03 10:52:50.822510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.464 [2024-12-03 10:52:50.822520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:20.464 [2024-12-03 10:52:50.822528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:20.464 [2024-12-03 10:52:50.822536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.464 [2024-12-03 10:52:50.830649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.464 [2024-12-03 10:52:50.830694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:20.464 [2024-12-03 10:52:50.830704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.033 ms 00:25:20.464 [2024-12-03 10:52:50.830712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.464 [2024-12-03 10:52:50.830811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.464 [2024-12-03 10:52:50.830821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:20.464 [2024-12-03 10:52:50.830830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:20.464 [2024-12-03 10:52:50.830839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.464 [2024-12-03 10:52:50.830882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.464 [2024-12-03 10:52:50.830894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:20.464 [2024-12-03 10:52:50.830903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:20.464 [2024-12-03 10:52:50.830911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.464 [2024-12-03 10:52:50.830941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:20.464 [2024-12-03 10:52:50.835179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.464 [2024-12-03 10:52:50.835218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:20.464 [2024-12-03 10:52:50.835230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.251 ms 00:25:20.464 [2024-12-03 10:52:50.835238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.464 [2024-12-03 10:52:50.835277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.464 [2024-12-03 10:52:50.835308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:20.464 [2024-12-03 10:52:50.835318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:20.465 [2024-12-03 10:52:50.835329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.465 [2024-12-03 10:52:50.835379] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:20.465 [2024-12-03 10:52:50.835402] 
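
The startup sequence above was triggered by the dirty_shutdown.sh@95 read-back shown earlier. Reformatted for readability (flags and paths copied from the log), it reads 262144 blocks from the ftl0 bdev, skipping the first 262144, into testfile2, with the bdev configuration taken from ftl.json:

```bash
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --count=262144 --skip=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
```
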
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:20.465 [2024-12-03 10:52:50.835438] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:20.465 [2024-12-03 10:52:50.835455] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:20.465 [2024-12-03 10:52:50.835530] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:20.465 [2024-12-03 10:52:50.835549] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:20.465 [2024-12-03 10:52:50.835562] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:20.465 [2024-12-03 10:52:50.835573] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:20.465 [2024-12-03 10:52:50.835583] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:20.465 [2024-12-03 10:52:50.835591] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:20.465 [2024-12-03 10:52:50.835598] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:20.465 [2024-12-03 10:52:50.835606] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:20.465 [2024-12-03 10:52:50.835614] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:20.465 [2024-12-03 10:52:50.835622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.465 [2024-12-03 10:52:50.835629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:20.465 [2024-12-03 10:52:50.835638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:25:20.465 [2024-12-03 10:52:50.835645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.465 [2024-12-03 10:52:50.835711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.465 [2024-12-03 10:52:50.835721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:20.465 [2024-12-03 10:52:50.835729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:20.465 [2024-12-03 10:52:50.835737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.465 [2024-12-03 10:52:50.835806] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:20.465 [2024-12-03 10:52:50.835817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:20.465 [2024-12-03 10:52:50.835825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:20.465 [2024-12-03 10:52:50.835834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.465 [2024-12-03 10:52:50.835842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:20.465 [2024-12-03 10:52:50.835849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:20.465 [2024-12-03 10:52:50.835857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:20.465 [2024-12-03 10:52:50.835863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:20.465 [2024-12-03 10:52:50.835870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
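
The 80.00 MiB l2p region reported above is exactly the L2P table sized from the ftl_layout_setup records in this dump. A quick cross-check (a sketch; both numbers copied from the output above):

```bash
# 20971520 L2P entries x 4-byte addresses, expressed in MiB.
echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80, matching "Region l2p ... 80.00 MiB"
```
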
00:25:20.465 [2024-12-03 10:52:50.835876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:20.465 [2024-12-03 10:52:50.835885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:20.465 [2024-12-03 10:52:50.835892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:20.465 [2024-12-03 10:52:50.835899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:20.465 [2024-12-03 10:52:50.835906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:20.465 [2024-12-03 10:52:50.835912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:20.465 [2024-12-03 10:52:50.835919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.465 [2024-12-03 10:52:50.835934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:20.465 [2024-12-03 10:52:50.835941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:20.465 [2024-12-03 10:52:50.835948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.465 [2024-12-03 10:52:50.835954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:20.465 [2024-12-03 10:52:50.835962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:20.465 [2024-12-03 10:52:50.835968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:20.465 [2024-12-03 10:52:50.835975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:20.465 [2024-12-03 10:52:50.835982] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:20.465 [2024-12-03 10:52:50.835990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:20.465 [2024-12-03 10:52:50.835997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:20.465 [2024-12-03 10:52:50.836004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:20.465 [2024-12-03 10:52:50.836011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:20.465 [2024-12-03 10:52:50.836019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:20.465 [2024-12-03 10:52:50.836026] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:20.465 [2024-12-03 10:52:50.836033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:20.465 [2024-12-03 10:52:50.836041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:20.465 [2024-12-03 10:52:50.836077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:20.465 [2024-12-03 10:52:50.836085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:20.465 [2024-12-03 10:52:50.836092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:20.465 [2024-12-03 10:52:50.836099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:20.465 [2024-12-03 10:52:50.836105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:20.465 [2024-12-03 10:52:50.836112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:20.465 [2024-12-03 10:52:50.836118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:20.465 [2024-12-03 10:52:50.836125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:20.465 [2024-12-03 10:52:50.836131] ftl_layout.c: 766:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] Base device layout: 00:25:20.465 [2024-12-03 10:52:50.836142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:20.465 [2024-12-03 10:52:50.836151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:20.465 [2024-12-03 10:52:50.836159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:20.465 [2024-12-03 10:52:50.836167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:20.465 [2024-12-03 10:52:50.836174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:20.465 [2024-12-03 10:52:50.836181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:20.465 [2024-12-03 10:52:50.836188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:20.465 [2024-12-03 10:52:50.836195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:20.465 [2024-12-03 10:52:50.836202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:20.465 [2024-12-03 10:52:50.836211] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:20.465 [2024-12-03 10:52:50.836221] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:20.465 [2024-12-03 10:52:50.836229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:20.465 [2024-12-03 10:52:50.836237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:20.465 [2024-12-03 10:52:50.836246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:20.465 [2024-12-03 10:52:50.836253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:20.465 [2024-12-03 10:52:50.836261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:20.465 [2024-12-03 10:52:50.836269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:20.465 [2024-12-03 10:52:50.836277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:20.465 [2024-12-03 10:52:50.836290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:20.465 [2024-12-03 10:52:50.836299] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:20.465 [2024-12-03 10:52:50.836309] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:20.465 [2024-12-03 10:52:50.836317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:20.465 [2024-12-03 10:52:50.836325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:20.465 [2024-12-03 10:52:50.836335] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:20.465 [2024-12-03 10:52:50.836343] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:20.465 [2024-12-03 10:52:50.836352] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:20.465 [2024-12-03 10:52:50.836360] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:20.465 [2024-12-03 10:52:50.836367] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:20.465 [2024-12-03 10:52:50.836375] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:20.465 [2024-12-03 10:52:50.836383] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:20.465 [2024-12-03 10:52:50.836390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.465 [2024-12-03 10:52:50.836398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:20.465 [2024-12-03 10:52:50.836406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:25:20.465 [2024-12-03 10:52:50.836416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.465 [2024-12-03 10:52:50.854790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.854842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:20.466 [2024-12-03 10:52:50.854855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.330 ms 00:25:20.466 [2024-12-03 10:52:50.854870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.854966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.854974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:20.466 [2024-12-03 10:52:50.854984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:20.466 [2024-12-03 10:52:50.854993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.900745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.900806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:20.466 [2024-12-03 10:52:50.900819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.698 ms 00:25:20.466 [2024-12-03 10:52:50.900828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.900879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.900889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:20.466 [2024-12-03 10:52:50.900898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:20.466 [2024-12-03 10:52:50.900907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.901511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 
10:52:50.901547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:20.466 [2024-12-03 10:52:50.901559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:25:20.466 [2024-12-03 10:52:50.901574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.901705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.901716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:20.466 [2024-12-03 10:52:50.901726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:25:20.466 [2024-12-03 10:52:50.901737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.918440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.918486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:20.466 [2024-12-03 10:52:50.918498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.677 ms 00:25:20.466 [2024-12-03 10:52:50.918507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.933152] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:20.466 [2024-12-03 10:52:50.933202] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:20.466 [2024-12-03 10:52:50.933214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.933223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:20.466 [2024-12-03 10:52:50.933233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.596 ms 00:25:20.466 [2024-12-03 10:52:50.933241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.959375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.959440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:20.466 [2024-12-03 10:52:50.959452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.079 ms 00:25:20.466 [2024-12-03 10:52:50.959462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.972878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.972924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:20.466 [2024-12-03 10:52:50.972936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.358 ms 00:25:20.466 [2024-12-03 10:52:50.972944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.985599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.985654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:20.466 [2024-12-03 10:52:50.985666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.608 ms 00:25:20.466 [2024-12-03 10:52:50.985674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:50.986097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:50.986113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:25:20.466 [2024-12-03 10:52:50.986125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:20.466 [2024-12-03 10:52:50.986133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:51.055041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:51.055116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:20.466 [2024-12-03 10:52:51.055132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.887 ms 00:25:20.466 [2024-12-03 10:52:51.055141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:51.066688] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:20.466 [2024-12-03 10:52:51.070297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:51.070343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:20.466 [2024-12-03 10:52:51.070355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.089 ms 00:25:20.466 [2024-12-03 10:52:51.070370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:51.070457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:51.070469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:20.466 [2024-12-03 10:52:51.070479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:20.466 [2024-12-03 10:52:51.070488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:51.071404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:51.071452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:20.466 [2024-12-03 10:52:51.071464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:25:20.466 [2024-12-03 10:52:51.071474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:51.072851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:51.072891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:20.466 [2024-12-03 10:52:51.072903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:25:20.466 [2024-12-03 10:52:51.072911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:51.072949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:51.072958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:20.466 [2024-12-03 10:52:51.072973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:20.466 [2024-12-03 10:52:51.072981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.466 [2024-12-03 10:52:51.073018] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:20.466 [2024-12-03 10:52:51.073030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.466 [2024-12-03 10:52:51.073042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:20.466 [2024-12-03 10:52:51.073050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.013 ms 00:25:20.466 [2024-12-03 10:52:51.073075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.727 [2024-12-03 10:52:51.099538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.727 [2024-12-03 10:52:51.099591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:20.727 [2024-12-03 10:52:51.099604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.442 ms 00:25:20.727 [2024-12-03 10:52:51.099612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.727 [2024-12-03 10:52:51.099703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.727 [2024-12-03 10:52:51.099713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:20.727 [2024-12-03 10:52:51.099724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:20.727 [2024-12-03 10:52:51.099733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.727 [2024-12-03 10:52:51.101041] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.426 ms, result 0 00:25:21.673  [2024-12-03T10:52:53.693Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-03T10:52:54.635Z] Copying: 35/1024 [MB] (19 MBps) [2024-12-03T10:52:55.578Z] Copying: 56/1024 [MB] (20 MBps) [2024-12-03T10:52:56.522Z] Copying: 71/1024 [MB] (15 MBps) [2024-12-03T10:52:57.467Z] Copying: 86/1024 [MB] (14 MBps) [2024-12-03T10:52:58.412Z] Copying: 96/1024 [MB] (10 MBps) [2024-12-03T10:52:59.355Z] Copying: 107/1024 [MB] (10 MBps) [2024-12-03T10:53:00.297Z] Copying: 117/1024 [MB] (10 MBps) [2024-12-03T10:53:01.683Z] Copying: 128/1024 [MB] (11 MBps) [2024-12-03T10:53:02.628Z] Copying: 140/1024 [MB] (11 MBps) [2024-12-03T10:53:03.571Z] Copying: 151/1024 [MB] (11 MBps) [2024-12-03T10:53:04.514Z] Copying: 173/1024 [MB] (21 MBps) [2024-12-03T10:53:05.459Z] Copying: 185/1024 [MB] (12 MBps) [2024-12-03T10:53:06.402Z] Copying: 203/1024 [MB] (18 MBps) [2024-12-03T10:53:07.347Z] Copying: 222/1024 [MB] (18 MBps) [2024-12-03T10:53:08.291Z] Copying: 244/1024 [MB] (22 MBps) [2024-12-03T10:53:09.679Z] Copying: 259/1024 [MB] (14 MBps) [2024-12-03T10:53:10.619Z] Copying: 278/1024 [MB] (19 MBps) [2024-12-03T10:53:11.558Z] Copying: 304/1024 [MB] (25 MBps) [2024-12-03T10:53:12.496Z] Copying: 323/1024 [MB] (18 MBps) [2024-12-03T10:53:13.437Z] Copying: 338/1024 [MB] (14 MBps) [2024-12-03T10:53:14.379Z] Copying: 352/1024 [MB] (14 MBps) [2024-12-03T10:53:15.320Z] Copying: 368/1024 [MB] (16 MBps) [2024-12-03T10:53:16.705Z] Copying: 387/1024 [MB] (18 MBps) [2024-12-03T10:53:17.649Z] Copying: 407/1024 [MB] (20 MBps) [2024-12-03T10:53:18.593Z] Copying: 429/1024 [MB] (21 MBps) [2024-12-03T10:53:19.536Z] Copying: 451/1024 [MB] (21 MBps) [2024-12-03T10:53:20.480Z] Copying: 470/1024 [MB] (18 MBps) [2024-12-03T10:53:21.425Z] Copying: 490/1024 [MB] (19 MBps) [2024-12-03T10:53:22.427Z] Copying: 512/1024 [MB] (22 MBps) [2024-12-03T10:53:23.370Z] Copying: 533/1024 [MB] (21 MBps) [2024-12-03T10:53:24.314Z] Copying: 557/1024 [MB] (23 MBps) [2024-12-03T10:53:25.703Z] Copying: 577/1024 [MB] (19 MBps) [2024-12-03T10:53:26.645Z] Copying: 594/1024 [MB] (17 MBps) [2024-12-03T10:53:27.590Z] Copying: 609/1024 [MB] (15 MBps) [2024-12-03T10:53:28.537Z] Copying: 622/1024 [MB] (12 MBps) [2024-12-03T10:53:29.482Z] Copying: 632/1024 [MB] (10 MBps) [2024-12-03T10:53:30.427Z] Copying: 645/1024 [MB] (12 MBps) [2024-12-03T10:53:31.370Z] Copying: 657/1024 
[MB] (12 MBps) [2024-12-03T10:53:32.315Z] Copying: 678/1024 [MB] (20 MBps) [2024-12-03T10:53:33.702Z] Copying: 690/1024 [MB] (12 MBps) [2024-12-03T10:53:34.648Z] Copying: 708/1024 [MB] (17 MBps) [2024-12-03T10:53:35.594Z] Copying: 721/1024 [MB] (13 MBps) [2024-12-03T10:53:36.534Z] Copying: 732/1024 [MB] (10 MBps) [2024-12-03T10:53:37.477Z] Copying: 742/1024 [MB] (10 MBps) [2024-12-03T10:53:38.421Z] Copying: 756/1024 [MB] (13 MBps) [2024-12-03T10:53:39.366Z] Copying: 773/1024 [MB] (17 MBps) [2024-12-03T10:53:40.313Z] Copying: 788/1024 [MB] (14 MBps) [2024-12-03T10:53:41.701Z] Copying: 802/1024 [MB] (13 MBps) [2024-12-03T10:53:42.646Z] Copying: 822/1024 [MB] (20 MBps) [2024-12-03T10:53:43.588Z] Copying: 843/1024 [MB] (21 MBps) [2024-12-03T10:53:44.532Z] Copying: 867/1024 [MB] (23 MBps) [2024-12-03T10:53:45.476Z] Copying: 887/1024 [MB] (20 MBps) [2024-12-03T10:53:46.421Z] Copying: 904/1024 [MB] (17 MBps) [2024-12-03T10:53:47.366Z] Copying: 916/1024 [MB] (12 MBps) [2024-12-03T10:53:48.311Z] Copying: 936/1024 [MB] (19 MBps) [2024-12-03T10:53:49.702Z] Copying: 953/1024 [MB] (17 MBps) [2024-12-03T10:53:50.647Z] Copying: 976/1024 [MB] (22 MBps) [2024-12-03T10:53:51.651Z] Copying: 995/1024 [MB] (19 MBps) [2024-12-03T10:53:52.225Z] Copying: 1010/1024 [MB] (15 MBps) [2024-12-03T10:53:52.798Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-03 10:53:52.638468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.638560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:22.185 [2024-12-03 10:53:52.638584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:22.185 [2024-12-03 10:53:52.638600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.638641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:22.185 [2024-12-03 10:53:52.641784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.641835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:22.185 [2024-12-03 10:53:52.641847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:26:22.185 [2024-12-03 10:53:52.641855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.642118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.642130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:22.185 [2024-12-03 10:53:52.642140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:26:22.185 [2024-12-03 10:53:52.642148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.645611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.645630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:22.185 [2024-12-03 10:53:52.645644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.450 ms 00:26:22.185 [2024-12-03 10:53:52.645652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.652423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.652466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:22.185 [2024-12-03 10:53:52.652476] 
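
The bracketed progress stamps above put the 1024 MB copy between roughly 10:52:53 and 10:53:52. A back-of-the-envelope check (sketch) against the reported average:

```bash
# ~1024 MB over the ~62 s spanned by the progress stamps, consistent
# with the logged "average 16 MBps".
awk 'BEGIN { printf "%.1f MBps\n", 1024 / 62 }'   # -> 16.5 MBps
```
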
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.735 ms 00:26:22.185 [2024-12-03 10:53:52.652485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.681189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.681238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:22.185 [2024-12-03 10:53:52.681251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.627 ms 00:26:22.185 [2024-12-03 10:53:52.681259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.698548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.698591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:22.185 [2024-12-03 10:53:52.698605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.236 ms 00:26:22.185 [2024-12-03 10:53:52.698621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.709372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.709419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:22.185 [2024-12-03 10:53:52.709431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.693 ms 00:26:22.185 [2024-12-03 10:53:52.709439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.736728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.736774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:22.185 [2024-12-03 10:53:52.736786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.271 ms 00:26:22.185 [2024-12-03 10:53:52.736795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.763108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.763151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:22.185 [2024-12-03 10:53:52.763177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.262 ms 00:26:22.185 [2024-12-03 10:53:52.763185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.185 [2024-12-03 10:53:52.789043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.185 [2024-12-03 10:53:52.789097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:22.185 [2024-12-03 10:53:52.789110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.807 ms 00:26:22.185 [2024-12-03 10:53:52.789118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.448 [2024-12-03 10:53:52.815261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.448 [2024-12-03 10:53:52.815315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:22.448 [2024-12-03 10:53:52.815327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.036 ms 00:26:22.448 [2024-12-03 10:53:52.815334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.448 [2024-12-03 10:53:52.815382] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:22.448 [2024-12-03 10:53:52.815407] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:22.448 [2024-12-03 10:53:52.815419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:26:22.448 [2024-12-03 10:53:52.815428 .. 10:53:52.816226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-99: 0 / 261120 wr_cnt: 0 state: free (one identical record per band) 00:26:22.450 [2024-12-03 10:53:52.816235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:22.450 [2024-12-03 10:53:52.816251] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:22.450 [2024-12-03 10:53:52.816260] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f590d54a-bc3c-45d1-a063-a9731ee34255 00:26:22.450 [2024-12-03 10:53:52.816268] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:26:22.450 [2024-12-03 10:53:52.816276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:22.450 [2024-12-03 10:53:52.816284] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:22.450 [2024-12-03 10:53:52.816292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:22.450 [2024-12-03 10:53:52.816299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:22.450 [2024-12-03 10:53:52.816307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:22.450 [2024-12-03 10:53:52.816315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:22.450 [2024-12-03 10:53:52.816330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:22.450 [2024-12-03 10:53:52.816336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:22.450 [2024-12-03 10:53:52.816344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.450 [2024-12-03 10:53:52.816366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:22.450 [2024-12-03 10:53:52.816378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:26:22.450 [2024-12-03 10:53:52.816386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.450 [2024-12-03 10:53:52.830129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.450 [2024-12-03 10:53:52.830170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:22.450 [2024-12-03 10:53:52.830182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.694 ms 00:26:22.450 [2024-12-03 10:53:52.830191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.450 [2024-12-03 10:53:52.830418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.450 [2024-12-03 10:53:52.830427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:22.450 [2024-12-03 10:53:52.830436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:26:22.450 [2024-12-03 10:53:52.830444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.450 [2024-12-03 10:53:52.869830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.450 [2024-12-03 10:53:52.869876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:22.450 [2024-12-03 10:53:52.869887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.450 [2024-12-03 10:53:52.869896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.450 [2024-12-03 10:53:52.869966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.450 [2024-12-03 10:53:52.869975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:22.450 [2024-12-03 10:53:52.869984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.450 [2024-12-03 10:53:52.869992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
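One detail worth reading out of the statistics dump above: WAF is the write amplification factor, total media writes divided by user (host) writes, and because the counters show user writes: 0 the quotient is undefined, which the dump prints as inf. A minimal sketch of the same arithmetic in shell; the variable names are illustrative, not the ftl_debug.c internals:

    # Recompute the WAF line from the two counters in the dump above.
    total_writes=960   # writes the FTL issued to the media
    user_writes=0      # writes submitted by the host so far
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"    # no host writes yet, so amplification is undefined
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi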
00:26:22.450 [2024-12-03 10:53:52.870100] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.870146] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.951965] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.984342] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.984492] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.984562] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.984693] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.984751] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.984821] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.984894] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:26:22.450 [2024-12-03 10:53:52.985123] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.570 ms, result 0
00:26:23.393 10:53:53 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:26:25.935 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:26:25.935 Process with pid 76123 is not found
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76123
00:26:25.935 10:53:56 -- common/autotest_common.sh@936 -- # '[' -z 76123 ']'
00:26:25.935 10:53:56 -- common/autotest_common.sh@940 -- # kill -0 76123
00:26:25.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76123) - No such process
00:26:25.935 10:53:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76123 is not found'
00:26:25.935 10:53:56 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:26:26.194 10:53:56 -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:26:26.194 Remove shared memory files
00:26:26.194 10:53:56 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:26:26.194 10:53:56 -- ftl/common.sh@205 -- # rm -f rm -f
00:26:26.194 10:53:56 -- ftl/common.sh@206 -- # rm -f rm -f
00:26:26.194 10:53:56 -- ftl/common.sh@207 -- # rm -f rm -f
00:26:26.194 10:53:56 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:26:26.194 10:53:56 -- ftl/common.sh@209 -- # rm -f rm -f
00:26:26.194 real 4m28.038s
00:26:26.194 user 4m53.990s
00:26:26.194 sys 0m28.875s
00:26:26.194 10:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:26.194 ************************************
00:26:26.194 END TEST ftl_dirty_shutdown
00:26:26.194 ************************************
00:26:26.194 10:53:56 -- common/autotest_common.sh@10 -- # set +x
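The killprocess trace above shows the helper's liveness probe: kill -0 sends no signal and only tests whether the pid exists, and the "No such process" error falls through to the not-found message. A minimal sketch of that pattern, simplified from the autotest_common.sh fragments visible in the trace (the real helper does more bookkeeping):

    # Sketch of a killprocess-style helper, following the trace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # no pid given
        if kill -0 "$pid" 2>/dev/null; then      # signal 0 = existence check only
            kill "$pid"                          # process is alive; terminate it
        else
            echo "Process with pid $pid is not found"
        fi
    }
    killprocess 76123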
00:26:26.194 10:53:56 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0
00:26:26.194 10:53:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:26:26.194 10:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:26.194 10:53:56 -- common/autotest_common.sh@10 -- # set +x
00:26:26.194 ************************************
00:26:26.194 START TEST ftl_upgrade_shutdown
00:26:26.194 ************************************
00:26:26.194 10:53:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0
00:26:26.194 * Looking for test storage...
00:26:26.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:26:26.194 10:53:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:26.194 10:53:56 -- common/autotest_common.sh@1690 -- # lcov --version
00:26:26.454 10:53:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:26.454 10:53:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:26.454 10:53:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:26.454 10:53:56 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:26.454 10:53:56 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:26.454 10:53:56 -- scripts/common.sh@335 -- # IFS=.-:
00:26:26.454 10:53:56 -- scripts/common.sh@335 -- # read -ra ver1
00:26:26.454 10:53:56 -- scripts/common.sh@336 -- # IFS=.-:
00:26:26.454 10:53:56 -- scripts/common.sh@336 -- # read -ra ver2
00:26:26.454 10:53:56 -- scripts/common.sh@337 -- # local 'op=<'
00:26:26.454 10:53:56 -- scripts/common.sh@339 -- # ver1_l=2
00:26:26.454 10:53:56 -- scripts/common.sh@340 -- # ver2_l=1
00:26:26.454 10:53:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:26.454 10:53:56 -- scripts/common.sh@343 -- # case "$op" in
00:26:26.454 10:53:56 -- scripts/common.sh@344 -- # : 1
00:26:26.454 10:53:56 -- scripts/common.sh@363 -- # (( v = 0 ))
00:26:26.454 10:53:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:26.454 10:53:56 -- scripts/common.sh@364 -- # decimal 1
00:26:26.454 10:53:56 -- scripts/common.sh@352 -- # local d=1
00:26:26.454 10:53:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:26.454 10:53:56 -- scripts/common.sh@354 -- # echo 1
00:26:26.454 10:53:56 -- scripts/common.sh@364 -- # ver1[v]=1
00:26:26.454 10:53:56 -- scripts/common.sh@365 -- # decimal 2
00:26:26.454 10:53:56 -- scripts/common.sh@352 -- # local d=2
00:26:26.454 10:53:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:26.454 10:53:56 -- scripts/common.sh@354 -- # echo 2
00:26:26.454 10:53:56 -- scripts/common.sh@365 -- # ver2[v]=2
00:26:26.454 10:53:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:26.454 10:53:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:26.454 10:53:56 -- scripts/common.sh@367 -- # return 0
00:26:26.454 10:53:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:26.454 10:53:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:26:26.454 10:53:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:26:26.454 10:53:56 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:26:26.454 10:53:56 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:26:26.454 10:53:56 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:26:26.454 10:53:56 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
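The cmp_versions trace above walks a dotted-version comparison component by component: both strings are split on the characters . - :, missing components count as zero, and the first unequal pair decides the result. A minimal self-contained sketch of that logic, paraphrased from the scripts/common.sh fragments in the trace (not the file's exact body):

    # Sketch of the dotted-version comparison seen in the trace above.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
        read -ra ver2 <<< "$3"        # "2"    -> (2)
        local v max
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components are 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"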
00:26:26.454 10:53:56 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:26:26.454 10:53:56 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:26.454 10:53:56 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:26:26.454 10:53:56 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:26:26.454 10:53:56 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:26:26.454 10:53:56 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:26:26.454 10:53:56 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:26:26.454 10:53:56 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:26:26.454 10:53:56 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:26:26.454 10:53:56 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:26:26.454 10:53:56 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:26:26.454 10:53:56 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:26:26.454 10:53:56 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2
00:26:26.454 10:53:56 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
00:26:26.454 10:53:56 -- ftl/common.sh@81 -- # local base_bdev=
00:26:26.454 10:53:56 -- ftl/common.sh@82 -- # local cache_bdev=
00:26:26.454 10:53:56 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:26:26.454 10:53:56 -- ftl/common.sh@89 -- # spdk_tgt_pid=79053
00:26:26.454 10:53:56 -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:26:26.454 10:53:56 -- ftl/common.sh@91 -- # waitforlisten 79053
00:26:26.454 10:53:56 -- common/autotest_common.sh@829 -- # '[' -z 79053 ']'
00:26:26.454 10:53:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:26.454 10:53:56 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:26.454 10:53:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:26.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:26.454 10:53:56 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
00:26:26.454 10:53:56 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:26.454 10:53:56 -- common/autotest_common.sh@10 -- # set +x
00:26:26.782 [2024-12-03 10:53:56.927025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:26:26.782 [2024-12-03 10:53:56.927162] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79053 ]
00:26:26.782 [2024-12-03 10:53:57.078515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:26.782 [2024-12-03 10:53:57.305123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:26.782 [2024-12-03 10:53:57.305379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.165 10:53:58 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:28.165 10:53:58 -- common/autotest_common.sh@862 -- # return 0
00:26:28.165 10:53:58 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:26:28.165 10:53:58 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT')
00:26:28.165 10:53:58 -- ftl/common.sh@99 -- # local params
00:26:28.165 10:53:58 -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:26:28.165 10:53:58 -- ftl/common.sh@101 -- # [[ -z ftl ]]
00:26:28.165 10:53:58 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]]
00:26:28.165 10:53:58 -- ftl/common.sh@101 -- # [[ -z 20480 ]]
00:26:28.165 10:53:58 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]]
00:26:28.165 10:53:58 -- ftl/common.sh@101 -- # [[ -z 5120 ]]
00:26:28.165 10:53:58 -- ftl/common.sh@101 -- # [[ -z 2 ]]
00:26:28.165 10:53:58 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480
00:26:28.165 10:53:58 -- ftl/common.sh@54 -- # local name=base
00:26:28.165 10:53:58 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:26:28.165 10:53:58 -- ftl/common.sh@56 -- # local size=20480
00:26:28.165 10:53:58 -- ftl/common.sh@59 -- # local base_bdev
00:26:28.165 10:53:58 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0
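The waitforlisten 79053 trace above launches spdk_tgt in the background and then blocks until its RPC socket answers. The helper's full body is not shown in this log; a minimal sketch of the polling idea, assuming rpc_get_methods as the liveness probe (the probe choice and retry interval are assumptions, not the helper's actual internals):

    # Hypothetical waitforlisten-style loop: poll the RPC socket until the
    # target process (pid 79053 above) starts answering, up to max_retries.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # RPC server is up
            fi
            sleep 0.5
        done
        return 1
    }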
00:26:28.165 10:53:58 -- ftl/common.sh@60 -- # base_bdev=basen1
00:26:28.165 10:53:58 -- ftl/common.sh@62 -- # local base_size
00:26:28.165 10:53:58 -- ftl/common.sh@63 -- # get_bdev_size basen1
00:26:28.165 10:53:58 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1
00:26:28.165 10:53:58 -- common/autotest_common.sh@1368 -- # local bdev_info
00:26:28.165 10:53:58 -- common/autotest_common.sh@1369 -- # local bs
00:26:28.165 10:53:58 -- common/autotest_common.sh@1370 -- # local nb
00:26:28.165 10:53:58 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
00:26:28.425 10:53:58 -- common/autotest_common.sh@1371 -- # bdev_info='[
  {
    "name": "basen1",
    "aliases": [ "53caad9b-39eb-4a85-b9cb-d67195342590" ],
    "product_name": "NVMe disk",
    "block_size": 4096,
    "num_blocks": 1310720,
    "uuid": "53caad9b-39eb-4a85-b9cb-d67195342590",
    "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "claimed": true,
    "claim_type": "read_many_write_one",
    "zoned": false,
    "supported_io_types": { "read": true, "write": true, "unmap": true, "write_zeroes": true, "flush": true, "reset": true, "compare": true, "compare_and_write": false, "abort": true, "nvme_admin": true, "nvme_io": true },
    "driver_specific": {
      "nvme": [
        {
          "pci_address": "0000:00:07.0",
          "trid": { "trtype": "PCIe", "traddr": "0000:00:07.0" },
          "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12341", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12341", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false },
          "vs": { "nvme_version": "1.4" },
          "ns_data": { "id": 1, "can_share": false }
        }
      ],
      "mp_policy": "active_passive"
    }
  }
]'
00:26:28.426 10:53:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:26:28.426 10:53:58 -- common/autotest_common.sh@1372 -- # bs=4096
00:26:28.426 10:53:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:26:28.426 10:53:58 -- common/autotest_common.sh@1373 -- # nb=1310720
00:26:28.426 10:53:58 -- common/autotest_common.sh@1376 -- # bdev_size=5120
00:26:28.426 10:53:58 -- common/autotest_common.sh@1377 -- # echo 5120
00:26:28.426 10:53:58 -- ftl/common.sh@63 -- # base_size=5120
00:26:28.426 10:53:58 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
00:26:28.426 10:53:58 -- ftl/common.sh@67 -- # clear_lvols
00:26:28.426 10:53:58 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:26:28.426 10:53:58 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:26:28.686 10:53:59 -- ftl/common.sh@28 -- # stores=eeef358e-a64f-4b71-a34f-24b470a9def2
00:26:28.686 10:53:59 -- ftl/common.sh@29 -- # for lvs in $stores
00:26:28.686 10:53:59 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eeef358e-a64f-4b71-a34f-24b470a9def2
00:26:28.947 10:53:59 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
00:26:28.947 10:53:59 -- ftl/common.sh@68 -- # lvs=bbd4e6b4-90b7-46c8-8ce1-61caf8a889ce
00:26:28.947 10:53:59 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u bbd4e6b4-90b7-46c8-8ce1-61caf8a889ce
00:26:29.208 10:53:59 -- ftl/common.sh@107 -- # base_bdev=999b39a5-2a8e-4547-99ba-5a020c171337
00:26:29.208 10:53:59 -- ftl/common.sh@108 -- # [[ -z 999b39a5-2a8e-4547-99ba-5a020c171337 ]]
00:26:29.208 10:53:59 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 999b39a5-2a8e-4547-99ba-5a020c171337 5120
00:26:29.208 10:53:59 -- ftl/common.sh@35 -- # local name=cache
00:26:29.208 10:53:59 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0
00:26:29.208 10:53:59 -- ftl/common.sh@37 -- # local base_bdev=999b39a5-2a8e-4547-99ba-5a020c171337
00:26:29.208 10:53:59 -- ftl/common.sh@38 -- # local cache_size=5120
00:26:29.208 10:53:59 -- ftl/common.sh@41 -- # get_bdev_size 999b39a5-2a8e-4547-99ba-5a020c171337
00:26:29.208 10:53:59 -- common/autotest_common.sh@1367 -- # local bdev_name=999b39a5-2a8e-4547-99ba-5a020c171337
00:26:29.208 10:53:59 -- common/autotest_common.sh@1368 -- # local bdev_info
00:26:29.208 10:53:59 -- common/autotest_common.sh@1369 -- # local bs
00:26:29.208 10:53:59 -- common/autotest_common.sh@1370 -- # local nb
00:26:29.208 10:53:59 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 999b39a5-2a8e-4547-99ba-5a020c171337
00:26:29.469 10:53:59 -- common/autotest_common.sh@1371 -- # bdev_info='[
  {
    "name": "999b39a5-2a8e-4547-99ba-5a020c171337",
    "aliases": [ "lvs/basen1p0" ],
    "product_name": "Logical Volume",
    "block_size": 4096,
    "num_blocks": 5242880,
    "uuid": "999b39a5-2a8e-4547-99ba-5a020c171337",
    "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
    "claimed": false,
    "zoned": false,
    "supported_io_types": { "read": true, "write": true, "unmap": true, "write_zeroes": true, "flush": false, "reset": true, "compare": false, "compare_and_write": false, "abort": false, "nvme_admin": false, "nvme_io": false },
    "driver_specific": {
      "lvol": { "lvol_store_uuid": "bbd4e6b4-90b7-46c8-8ce1-61caf8a889ce", "base_bdev": "basen1", "thin_provision": true, "snapshot": false, "clone": false, "esnap_clone": false }
    }
  }
]'
00:26:29.469 10:53:59 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:26:29.469 10:53:59 -- common/autotest_common.sh@1372 -- # bs=4096
00:26:29.469 10:53:59 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:26:29.469 10:53:59 -- common/autotest_common.sh@1373 -- # nb=5242880
00:26:29.470 10:53:59 -- common/autotest_common.sh@1376 -- # bdev_size=20480
00:26:29.470 10:53:59 -- common/autotest_common.sh@1377 -- # echo 20480
00:26:29.470 10:53:59 -- ftl/common.sh@41 -- # local base_size=1024
00:26:29.470 10:53:59 -- ftl/common.sh@44 -- # local nvc_bdev
00:26:29.470 10:53:59 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
00:26:29.731 10:54:00 -- ftl/common.sh@45 -- # nvc_bdev=cachen1
00:26:29.731 10:54:00 -- ftl/common.sh@47 -- # [[ -z 5120 ]]
00:26:29.731 10:54:00 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
00:26:29.993 10:54:00 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
00:26:29.993 10:54:00 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
00:26:29.994 10:54:00 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 999b39a5-2a8e-4547-99ba-5a020c171337 -c cachen1p0 --l2p_dram_limit 2
00:26:29.994 [2024-12-03 10:54:00.538965] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Check configuration, duration: 0.005 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.539080] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Open base bdev, duration: 0.039 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.539119] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:26:29.994 [2024-12-03 10:54:00.539733] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:26:29.994 [2024-12-03 10:54:00.539755] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Open cache bdev, duration: 0.638 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.539830] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 1527cf0e-44c7-4fd4-a655-126b687d8e76
00:26:29.994 [2024-12-03 10:54:00.540804] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Default-initialize superblock, duration: 0.018 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.545837] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize memory pools, duration: 4.957 ms, status: 0
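Pulling the RPC calls out of the trace above gives the whole construction of the FTL bdev in a handful of steps. The sequence below is a condensed replay of the exact commands in the log; the lvstore and lvol UUIDs are the ones printed above and would differ on every run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1. base device: attach the 0000:00:07.0 NVMe controller, exposed as basen1
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0
    # 2. carve a thin-provisioned 20480 MiB lvol (basen1p0) out of it
    $rpc bdev_lvol_create_lvstore basen1 lvs
    $rpc bdev_lvol_create basen1p0 20480 -t -u bbd4e6b4-90b7-46c8-8ce1-61caf8a889ce
    # 3. cache device: attach 0000:00:06.0 and split off a 5120 MiB partition
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
    $rpc bdev_split_create cachen1 -s 5120 1
    # 4. bind base lvol + cache partition into an FTL bdev, 2 GiB L2P DRAM budget
    $rpc -t 60 bdev_ftl_create -b ftl -d 999b39a5-2a8e-4547-99ba-5a020c171337 -c cachen1p0 --l2p_dram_limit 2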
00:26:29.994 [2024-12-03 10:54:00.545912] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize bands, duration: 0.014 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.545972] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Register IO device, duration: 0.006 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.546016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:26:29.994 [2024-12-03 10:54:00.549060] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize core IO channel, duration: 3.040 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.549124] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Decorate bands, duration: 0.004 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.549160] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1
00:26:29.994 [2024-12-03 10:54:00.549246] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes
00:26:29.994 [2024-12-03 10:54:00.549258] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:26:29.994 [2024-12-03 10:54:00.549266] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes
00:26:29.994 [2024-12-03 10:54:00.549276] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB
00:26:29.994 [2024-12-03 10:54:00.549283] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB
00:26:29.994 [2024-12-03 10:54:00.549292] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873
00:26:29.994 [2024-12-03 10:54:00.549298] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4
00:26:29.994 [2024-12-03 10:54:00.549305] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024
00:26:29.994 [2024-12-03 10:54:00.549311] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4
00:26:29.994 [2024-12-03 10:54:00.549318] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize layout, duration: 0.160 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.549392] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Verify layout, duration: 0.036 ms, status: 0
00:26:29.994 [2024-12-03 10:54:00.549470] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:26:29.994 [2024-12-03 10:54:00.549477] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549498] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region l2p: offset 0.12 MiB, blocks 14.50 MiB
00:26:29.994 [2024-12-03 10:54:00.549515] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region band_md: offset 14.62 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549533] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror: offset 14.75 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549550] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region nvc_md: offset 31.12 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549569] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror: offset 31.25 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549591] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region data_nvc: offset 31.38 MiB, blocks 4096.00 MiB
00:26:29.994 [2024-12-03 10:54:00.549609] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region p2l0: offset 14.88 MiB, blocks 4.00 MiB
00:26:29.994 [2024-12-03 10:54:00.549625] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region p2l1: offset 18.88 MiB, blocks 4.00 MiB
00:26:29.994 [2024-12-03 10:54:00.549642] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region p2l2: offset 22.88 MiB, blocks 4.00 MiB
00:26:29.994 [2024-12-03 10:54:00.549661] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region p2l3: offset 26.88 MiB, blocks 4.00 MiB
00:26:29.994 [2024-12-03 10:54:00.549680] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region trim_md: offset 30.88 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549696] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror: offset 31.00 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549714] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:26:29.994 [2024-12-03 10:54:00.549720] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:26:29.994 [2024-12-03 10:54:00.549741] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region vmap: offset 18432.25 MiB, blocks 0.88 MiB
00:26:29.994 [2024-12-03 10:54:00.549758] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl] Region data_btm: offset 0.25 MiB, blocks 18432.00 MiB
00:26:29.995 [2024-12-03 10:54:00.549777] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:26:29.995 [2024-12-03 10:54:00.549784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:26:29.995 [2024-12-03 10:54:00.549798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400
00:26:29.995 [2024-12-03 10:54:00.549816] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400
00:26:29.995 [2024-12-03 10:54:00.549822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400
00:26:29.995 [2024-12-03 10:54:00.549828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400
00:26:29.995 [2024-12-03 10:54:00.549834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549853] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000
00:26:29.995 [2024-12-03 10:54:00.549868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0
00:26:29.995 [2024-12-03 10:54:00.549874] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:26:29.995 [2024-12-03 10:54:00.549882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549888] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:26:29.995 [2024-12-03 10:54:00.549895] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:26:29.995 [2024-12-03 10:54:00.549901] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:26:29.995 [2024-12-03 10:54:00.549907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:26:29.995 [2024-12-03 10:54:00.549913] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Layout upgrade, duration: 0.479 ms, status: 0
00:26:29.995 [2024-12-03 10:54:00.561832] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize metadata, duration: 11.858 ms, status: 0
00:26:29.995 [2024-12-03 10:54:00.561912] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize band addresses, duration: 0.010 ms, status: 0
00:26:29.995 [2024-12-03 10:54:00.585994] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize NV cache, duration: 24.024 ms, status: 0
00:26:29.995 [2024-12-03 10:54:00.586074] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize valid map, duration: 0.002 ms, status: 0
00:26:29.995 [2024-12-03 10:54:00.586410] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize trim map, duration: 0.276 ms, status: 0
00:26:29.995 [2024-12-03 10:54:00.586489] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize bands metadata, duration: 0.020 ms, status: 0
00:26:29.995 [2024-12-03 10:54:00.598770] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize reloc, duration: 12.239 ms, status: 0
00:26:30.257 [2024-12-03 10:54:00.608003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:26:30.257 [2024-12-03 10:54:00.608760] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize L2P, duration: 9.890 ms, status: 0
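The layout numbers above are internally consistent and worth checking once: the device exposes 3774873 L2P entries at 4 bytes each, which is where the 14.50 MiB l2p region comes from, while the "l2p maximum resident size is: 1 (of 2) MiB" cap is set by the --l2p_dram_limit 2 argument passed to bdev_ftl_create. A quick arithmetic check, using only figures printed in the trace:

    # L2P table size implied by the layout dump above.
    entries=3774873          # "L2P entries" from ftl_layout.c
    entry_size=4             # "L2P address size" in bytes
    echo $(( entries * entry_size ))            # 15099492 bytes
    echo $(( entries * entry_size / 1048576 ))  # ~14 MiB; the region is 14.50 MiB after rounding up to block granularity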
00:26:30.257 [2024-12-03 10:54:00.630909] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Clear L2P, duration: 22.090 ms, status: 0
00:26:30.257 [2024-12-03 10:54:00.630991] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time.
00:26:30.257 [2024-12-03 10:54:00.631000] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB
00:26:34.468 [2024-12-03 10:54:04.373579] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Scrub NV cache, duration: 3742.557 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.373829] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Finalize band initialization, duration: 0.065 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.400170] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Save initial band info metadata, duration: 26.237 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.425792] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Save initial chunk info metadata, duration: 25.483 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.426240] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Initialize P2L checkpointing, duration: 0.316 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.502007] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Wipe P2L region, duration: 75.683 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.529775] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Clear trim map, duration: 27.610 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.531328] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Free P2L region bufs, duration: 1.405 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.558478] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Set FTL dirty state, duration: 27.023 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.558613] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Start core poller, duration: 0.007 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.558756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl] Action: Finalize initialization, duration: 0.040 ms, status: 0
00:26:34.468 [2024-12-03 10:54:04.559977] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4020.501 ms, result 0
00:26:34.468 {
00:26:34.468   "name": "ftl",
00:26:34.468   "uuid": "1527cf0e-44c7-4fd4-a655-126b687d8e76"
00:26:34.468 }
00:26:34.468 10:54:04 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP
00:26:34.468 [2024-12-03 10:54:04.771080] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:34.468 10:54:04 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
00:26:34.728 10:54:05 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
00:26:34.728 [2024-12-03 10:54:05.183532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0
00:26:34.995 10:54:05 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
00:26:34.995 [2024-12-03 10:54:05.393190] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:26:35.254 10:54:05 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:26:35.254 Fill FTL, iteration 1
FTL, iteration 1 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:35.254 10:54:05 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:35.254 10:54:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:35.254 10:54:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:35.254 10:54:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:35.255 10:54:05 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:35.255 10:54:05 -- ftl/common.sh@163 -- # spdk_ini_pid=79182 00:26:35.255 10:54:05 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:35.255 10:54:05 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:35.255 10:54:05 -- ftl/common.sh@165 -- # waitforlisten 79182 /var/tmp/spdk.tgt.sock 00:26:35.255 10:54:05 -- common/autotest_common.sh@829 -- # '[' -z 79182 ']' 00:26:35.255 10:54:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:35.255 10:54:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.255 10:54:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:35.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:35.255 10:54:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.255 10:54:05 -- common/autotest_common.sh@10 -- # set +x 00:26:35.255 [2024-12-03 10:54:05.776771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
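Taken together, the knobs traced above pin down the I/O geometry of the test. A quick check of the arithmetic (values straight from the xtrace):

    bs * count = 1048576 B * 1024 = 1073741824 B = 1 GiB   (= size, per pass)
    iterations = 2, qd = 2  =>  2 GiB written in total

Since seek and skip are counted in bs-sized blocks, they step 0 -> 1024 -> 2048 across the two iterations, as the later xtrace confirms.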
00:26:35.255 [2024-12-03 10:54:05.777213] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79182 ] 00:26:35.513 [2024-12-03 10:54:05.922935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.513 [2024-12-03 10:54:06.064448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:35.513 [2024-12-03 10:54:06.064596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.084 10:54:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.084 10:54:06 -- common/autotest_common.sh@862 -- # return 0 00:26:36.084 10:54:06 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:36.345 ftln1 00:26:36.345 10:54:06 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:36.345 10:54:06 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:36.606 10:54:07 -- ftl/common.sh@173 -- # echo ']}' 00:26:36.606 10:54:07 -- ftl/common.sh@176 -- # killprocess 79182 00:26:36.606 10:54:07 -- common/autotest_common.sh@936 -- # '[' -z 79182 ']' 00:26:36.606 10:54:07 -- common/autotest_common.sh@940 -- # kill -0 79182 00:26:36.606 10:54:07 -- common/autotest_common.sh@941 -- # uname 00:26:36.606 10:54:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:36.606 10:54:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79182 00:26:36.606 10:54:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:36.606 10:54:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:36.606 killing process with pid 79182 00:26:36.606 10:54:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79182' 00:26:36.606 10:54:07 -- common/autotest_common.sh@955 -- # kill 79182 00:26:36.606 10:54:07 -- common/autotest_common.sh@960 -- # wait 79182 00:26:38.516 10:54:08 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:38.516 10:54:08 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:38.517 [2024-12-03 10:54:08.691373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
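The tcp_dd helper being traced here is two-phase: tcp_initiator_setup spins up a short-lived initiator-side spdk_tgt just long enough to attach the exported FTL namespace and snapshot the resulting bdev configuration, then spdk_dd replays that snapshot. A condensed sketch under that reading (socket, NQN and flags as logged; the capture into ini.json is inferred from the file test at ftl/common.sh@153, not shown verbatim in the trace):

    # phase 1: throwaway initiator target on core 1
    spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0            # namespace appears as bdev 'ftln1'
    { echo '{"subsystems": ['
      rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'; } > test/ftl/config/ini.json    # inferred destination
    killprocess $spdk_ini_pid
    # phase 2: spdk_dd loads the snapshot and drives ftln1 without a live initiator target
    spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0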
00:26:38.517 [2024-12-03 10:54:08.691474] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79224 ] 00:26:38.517 [2024-12-03 10:54:08.840147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.517 [2024-12-03 10:54:09.030095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.899  [2024-12-03T10:54:11.451Z] Copying: 201/1024 [MB] (201 MBps) [2024-12-03T10:54:12.825Z] Copying: 415/1024 [MB] (214 MBps) [2024-12-03T10:54:13.757Z] Copying: 668/1024 [MB] (253 MBps) [2024-12-03T10:54:14.016Z] Copying: 901/1024 [MB] (233 MBps) [2024-12-03T10:54:14.951Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:26:44.338 00:26:44.338 10:54:14 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:44.338 Calculate MD5 checksum, iteration 1 00:26:44.338 10:54:14 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:44.338 10:54:14 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:44.338 10:54:14 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:44.338 10:54:14 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:44.338 10:54:14 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:44.338 10:54:14 -- ftl/common.sh@154 -- # return 0 00:26:44.338 10:54:14 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:44.338 [2024-12-03 10:54:14.680828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
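Each 'Calculate MD5 checksum' pass, like the one starting here, reads the just-written 1 GiB window back out of ftln1 into a scratch file and fingerprints it. The per-iteration pattern reduces to the following sketch (reconstructed from the traced commands in upgrade_shutdown.sh, not its verbatim source):

    for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
      seek=$((seek + 1024))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
          --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      # same cut invocation as the xtrace: keep only the digest field
      sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    done

The saved digests (0adda0df... and 0e3b3c6c... in this run) are presumably what the post-restart half of the test verifies the data against.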
00:26:44.338 [2024-12-03 10:54:14.680933] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79292 ] 00:26:44.338 [2024-12-03 10:54:14.826771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.596 [2024-12-03 10:54:14.992618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.970  [2024-12-03T10:54:17.150Z] Copying: 637/1024 [MB] (637 MBps) [2024-12-03T10:54:17.718Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:26:47.105 00:26:47.105 10:54:17 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:47.105 10:54:17 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:49.647 10:54:19 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:49.647 10:54:19 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0adda0dfa664a7d16dbb2651c290a3ea 00:26:49.647 10:54:19 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:49.647 10:54:19 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:49.647 Fill FTL, iteration 2 00:26:49.647 10:54:19 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:49.647 10:54:19 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:49.647 10:54:19 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:49.647 10:54:19 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:49.647 10:54:19 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:49.647 10:54:19 -- ftl/common.sh@154 -- # return 0 00:26:49.647 10:54:19 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:49.647 [2024-12-03 10:54:19.701442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:49.647 [2024-12-03 10:54:19.701524] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79349 ] 00:26:49.647 [2024-12-03 10:54:19.842275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.647 [2024-12-03 10:54:20.012215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.088  [2024-12-03T10:54:22.638Z] Copying: 250/1024 [MB] (250 MBps) [2024-12-03T10:54:23.343Z] Copying: 488/1024 [MB] (238 MBps) [2024-12-03T10:54:24.721Z] Copying: 724/1024 [MB] (236 MBps) [2024-12-03T10:54:24.721Z] Copying: 967/1024 [MB] (243 MBps) [2024-12-03T10:54:25.661Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:26:55.048 00:26:55.048 10:54:25 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:55.048 Calculate MD5 checksum, iteration 2 00:26:55.048 10:54:25 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:55.048 10:54:25 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:55.048 10:54:25 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:55.048 10:54:25 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:55.048 10:54:25 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:55.048 10:54:25 -- ftl/common.sh@154 -- # return 0 00:26:55.048 10:54:25 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:55.048 [2024-12-03 10:54:25.584761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
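Once the second fingerprint is banked, the test moves on to the FTL property RPCs traced below: verbose_mode is enabled to expose the extended property dump, prep_upgrade_on_shutdown is switched on, and the cache_device chunk list is used to confirm there is dirty cache data for the shutdown path to handle. That gate reduces to the following (jq filter quoted from the xtrace; the non-zero assertion is a reconstruction of upgrade_shutdown.sh@64, not its verbatim text):

    rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    used=$(rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1    # this run: used=3 (two CLOSED chunks plus one OPEN)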
00:26:55.048 [2024-12-03 10:54:25.584866] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79413 ] 00:26:55.310 [2024-12-03 10:54:25.733106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.571 [2024-12-03 10:54:25.984875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.478  [2024-12-03T10:54:28.350Z] Copying: 660/1024 [MB] (660 MBps) [2024-12-03T10:54:29.728Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:26:59.115 00:26:59.115 10:54:29 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:59.115 10:54:29 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:01.022 10:54:31 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:01.022 10:54:31 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0e3b3c6c4b64f8fe8438b6a9c014b08f 00:27:01.022 10:54:31 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:01.022 10:54:31 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:01.022 10:54:31 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:01.281 [2024-12-03 10:54:31.802239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.281 [2024-12-03 10:54:31.802278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:01.281 [2024-12-03 10:54:31.802289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:01.281 [2024-12-03 10:54:31.802298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.281 [2024-12-03 10:54:31.802316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.281 [2024-12-03 10:54:31.802323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:01.281 [2024-12-03 10:54:31.802329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:01.281 [2024-12-03 10:54:31.802335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.281 [2024-12-03 10:54:31.802350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.281 [2024-12-03 10:54:31.802356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:01.281 [2024-12-03 10:54:31.802367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:01.281 [2024-12-03 10:54:31.802373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.281 [2024-12-03 10:54:31.802422] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.172 ms, result 0 00:27:01.281 true 00:27:01.281 10:54:31 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:01.541 { 00:27:01.541 "name": "ftl", 00:27:01.541 "properties": [ 00:27:01.541 { 00:27:01.541 "name": "superblock_version", 00:27:01.541 "value": 5, 00:27:01.541 "read-only": true 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "name": "base_device", 00:27:01.541 "bands": [ 00:27:01.541 { 00:27:01.541 "id": 0, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 1, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 2, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 
00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 3, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 4, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 5, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 6, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 7, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 8, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 9, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 10, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 11, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 12, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 13, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 14, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 15, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 16, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 17, 00:27:01.541 "state": "FREE", 00:27:01.541 "validity": 0.0 00:27:01.541 } 00:27:01.541 ], 00:27:01.541 "read-only": true 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "name": "cache_device", 00:27:01.541 "type": "bdev", 00:27:01.541 "chunks": [ 00:27:01.541 { 00:27:01.541 "id": 0, 00:27:01.541 "state": "CLOSED", 00:27:01.541 "utilization": 1.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 1, 00:27:01.541 "state": "CLOSED", 00:27:01.541 "utilization": 1.0 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 2, 00:27:01.541 "state": "OPEN", 00:27:01.541 "utilization": 0.001953125 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "id": 3, 00:27:01.541 "state": "OPEN", 00:27:01.541 "utilization": 0.0 00:27:01.541 } 00:27:01.541 ], 00:27:01.541 "read-only": true 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "name": "verbose_mode", 00:27:01.541 "value": true, 00:27:01.541 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:01.541 }, 00:27:01.541 { 00:27:01.541 "name": "prep_upgrade_on_shutdown", 00:27:01.541 "value": false, 00:27:01.541 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:01.541 } 00:27:01.541 ] 00:27:01.541 } 00:27:01.541 10:54:32 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:01.800 [2024-12-03 10:54:32.178555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.800 [2024-12-03 10:54:32.178589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:01.800 [2024-12-03 10:54:32.178599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:01.800 [2024-12-03 10:54:32.178605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.800 [2024-12-03 10:54:32.178621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:01.800 [2024-12-03 10:54:32.178627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:01.800 [2024-12-03 10:54:32.178632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:01.800 [2024-12-03 10:54:32.178638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.800 [2024-12-03 10:54:32.178652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.800 [2024-12-03 10:54:32.178658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:01.800 [2024-12-03 10:54:32.178663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:01.800 [2024-12-03 10:54:32.178669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.800 [2024-12-03 10:54:32.178711] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.148 ms, result 0 00:27:01.800 true 00:27:01.800 10:54:32 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:01.800 10:54:32 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:01.800 10:54:32 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:01.800 10:54:32 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:01.800 10:54:32 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:01.800 10:54:32 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:02.061 [2024-12-03 10:54:32.562897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.061 [2024-12-03 10:54:32.562932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:02.061 [2024-12-03 10:54:32.562941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:02.061 [2024-12-03 10:54:32.562947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.061 [2024-12-03 10:54:32.562963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.061 [2024-12-03 10:54:32.562970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:02.061 [2024-12-03 10:54:32.562977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:02.061 [2024-12-03 10:54:32.562983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.061 [2024-12-03 10:54:32.562997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.061 [2024-12-03 10:54:32.563003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:02.062 [2024-12-03 10:54:32.563008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:02.062 [2024-12-03 10:54:32.563014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.062 [2024-12-03 10:54:32.563066] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.151 ms, result 0 00:27:02.062 true 00:27:02.062 10:54:32 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:02.321 { 00:27:02.321 "name": "ftl", 00:27:02.321 "properties": [ 00:27:02.321 { 00:27:02.321 "name": "superblock_version", 00:27:02.321 "value": 5, 00:27:02.321 "read-only": true 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 
"name": "base_device", 00:27:02.321 "bands": [ 00:27:02.321 { 00:27:02.321 "id": 0, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 1, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 2, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 3, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 4, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 5, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 6, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 7, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 8, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 9, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 10, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 11, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 12, 00:27:02.321 "state": "FREE", 00:27:02.321 "validity": 0.0 00:27:02.321 }, 00:27:02.321 { 00:27:02.321 "id": 13, 00:27:02.321 "state": "FREE", 00:27:02.322 "validity": 0.0 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "id": 14, 00:27:02.322 "state": "FREE", 00:27:02.322 "validity": 0.0 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "id": 15, 00:27:02.322 "state": "FREE", 00:27:02.322 "validity": 0.0 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "id": 16, 00:27:02.322 "state": "FREE", 00:27:02.322 "validity": 0.0 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "id": 17, 00:27:02.322 "state": "FREE", 00:27:02.322 "validity": 0.0 00:27:02.322 } 00:27:02.322 ], 00:27:02.322 "read-only": true 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "name": "cache_device", 00:27:02.322 "type": "bdev", 00:27:02.322 "chunks": [ 00:27:02.322 { 00:27:02.322 "id": 0, 00:27:02.322 "state": "CLOSED", 00:27:02.322 "utilization": 1.0 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "id": 1, 00:27:02.322 "state": "CLOSED", 00:27:02.322 "utilization": 1.0 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "id": 2, 00:27:02.322 "state": "OPEN", 00:27:02.322 "utilization": 0.001953125 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "id": 3, 00:27:02.322 "state": "OPEN", 00:27:02.322 "utilization": 0.0 00:27:02.322 } 00:27:02.322 ], 00:27:02.322 "read-only": true 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "name": "verbose_mode", 00:27:02.322 "value": true, 00:27:02.322 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:02.322 }, 00:27:02.322 { 00:27:02.322 "name": "prep_upgrade_on_shutdown", 00:27:02.322 "value": true, 00:27:02.322 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:02.322 } 00:27:02.322 ] 00:27:02.322 } 00:27:02.322 10:54:32 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:02.322 10:54:32 -- ftl/common.sh@130 -- # [[ -n 79053 ]] 00:27:02.322 10:54:32 -- ftl/common.sh@131 -- # killprocess 79053 00:27:02.322 10:54:32 -- common/autotest_common.sh@936 -- # '[' -z 79053 ']' 00:27:02.322 10:54:32 -- 
common/autotest_common.sh@940 -- # kill -0 79053 00:27:02.322 10:54:32 -- common/autotest_common.sh@941 -- # uname 00:27:02.322 10:54:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:02.322 10:54:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79053 00:27:02.322 10:54:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:02.322 10:54:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:02.322 killing process with pid 79053 00:27:02.322 10:54:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79053' 00:27:02.322 10:54:32 -- common/autotest_common.sh@955 -- # kill 79053 00:27:02.322 10:54:32 -- common/autotest_common.sh@960 -- # wait 79053 00:27:02.894 [2024-12-03 10:54:33.303752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:02.894 [2024-12-03 10:54:33.316334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.894 [2024-12-03 10:54:33.316369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:02.894 [2024-12-03 10:54:33.316379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:02.894 [2024-12-03 10:54:33.316385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.894 [2024-12-03 10:54:33.316402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:02.894 [2024-12-03 10:54:33.318401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.894 [2024-12-03 10:54:33.318427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:02.894 [2024-12-03 10:54:33.318434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.989 ms 00:27:02.894 [2024-12-03 10:54:33.318441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.589878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.589925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:11.029 [2024-12-03 10:54:41.589937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8271.386 ms 00:27:11.029 [2024-12-03 10:54:41.589943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.590991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.591009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:11.029 [2024-12-03 10:54:41.591017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.032 ms 00:27:11.029 [2024-12-03 10:54:41.591022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.591890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.591910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:11.029 [2024-12-03 10:54:41.591917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:27:11.029 [2024-12-03 10:54:41.591923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.599816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.599845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:11.029 [2024-12-03 10:54:41.599852] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.859 ms 00:27:11.029 [2024-12-03 10:54:41.599858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.605312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.605343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:11.029 [2024-12-03 10:54:41.605351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.428 ms 00:27:11.029 [2024-12-03 10:54:41.605358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.605416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.605424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:11.029 [2024-12-03 10:54:41.605431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:11.029 [2024-12-03 10:54:41.605440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.612659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.612693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:11.029 [2024-12-03 10:54:41.612700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.207 ms 00:27:11.029 [2024-12-03 10:54:41.612706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.619957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.619984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:11.029 [2024-12-03 10:54:41.619991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.226 ms 00:27:11.029 [2024-12-03 10:54:41.619996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.627202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.627227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:11.029 [2024-12-03 10:54:41.627234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.180 ms 00:27:11.029 [2024-12-03 10:54:41.627239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.634377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.029 [2024-12-03 10:54:41.634404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:11.029 [2024-12-03 10:54:41.634410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.091 ms 00:27:11.029 [2024-12-03 10:54:41.634416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.029 [2024-12-03 10:54:41.634439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:11.029 [2024-12-03 10:54:41.634450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:11.029 [2024-12-03 10:54:41.634457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:11.029 [2024-12-03 10:54:41.634463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:11.029 [2024-12-03 10:54:41.634469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:27:11.029 [2024-12-03 10:54:41.634475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:11.029 [2024-12-03 10:54:41.634480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:11.029 [2024-12-03 10:54:41.634486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:11.029 [2024-12-03 10:54:41.634491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:11.030 [2024-12-03 10:54:41.634562] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:11.030 [2024-12-03 10:54:41.634568] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1527cf0e-44c7-4fd4-a655-126b687d8e76 00:27:11.030 [2024-12-03 10:54:41.634574] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:11.030 [2024-12-03 10:54:41.634579] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:11.030 [2024-12-03 10:54:41.634585] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:11.030 [2024-12-03 10:54:41.634591] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:11.030 [2024-12-03 10:54:41.634596] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:11.030 [2024-12-03 10:54:41.634602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:11.030 [2024-12-03 10:54:41.634609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:11.030 [2024-12-03 10:54:41.634614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:11.030 [2024-12-03 10:54:41.634619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:11.030 [2024-12-03 10:54:41.634625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.030 [2024-12-03 10:54:41.634630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:11.030 [2024-12-03 10:54:41.634637] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:27:11.030 [2024-12-03 10:54:41.634643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.644077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.290 [2024-12-03 10:54:41.644103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:11.290 [2024-12-03 10:54:41.644111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.415 ms 00:27:11.290 [2024-12-03 10:54:41.644117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.644268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.290 [2024-12-03 10:54:41.644280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:11.290 [2024-12-03 10:54:41.644287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.133 ms 00:27:11.290 [2024-12-03 10:54:41.644292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.678732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.678765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:11.290 [2024-12-03 10:54:41.678773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.678783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.678811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.678818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:11.290 [2024-12-03 10:54:41.678824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.678830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.678878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.678886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:11.290 [2024-12-03 10:54:41.678891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.678897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.678911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.678917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:11.290 [2024-12-03 10:54:41.678923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.678928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.738121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.738158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:11.290 [2024-12-03 10:54:41.738167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.738174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.760776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.760808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:11.290 
[2024-12-03 10:54:41.760817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.760823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.760869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.760876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:11.290 [2024-12-03 10:54:41.760882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.760887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.760917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.760928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:11.290 [2024-12-03 10:54:41.760933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.760939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.761007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.761014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:11.290 [2024-12-03 10:54:41.761020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.761026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.761047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.761067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:11.290 [2024-12-03 10:54:41.761075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.761081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.761108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.761114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:11.290 [2024-12-03 10:54:41.761121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.761126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.761162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:11.290 [2024-12-03 10:54:41.761174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:11.290 [2024-12-03 10:54:41.761181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:11.290 [2024-12-03 10:54:41.761186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.290 [2024-12-03 10:54:41.761277] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8444.890 ms, result 0 00:27:12.676 10:54:43 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:12.676 10:54:43 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:12.676 10:54:43 -- ftl/common.sh@81 -- # local base_bdev= 00:27:12.676 10:54:43 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:12.676 10:54:43 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:12.676 10:54:43 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
'--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:12.676 10:54:43 -- ftl/common.sh@89 -- # spdk_tgt_pid=79611 00:27:12.676 10:54:43 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:12.676 10:54:43 -- ftl/common.sh@91 -- # waitforlisten 79611 00:27:12.676 10:54:43 -- common/autotest_common.sh@829 -- # '[' -z 79611 ']' 00:27:12.676 10:54:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.676 10:54:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.676 10:54:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.676 10:54:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.676 10:54:43 -- common/autotest_common.sh@10 -- # set +x 00:27:12.676 [2024-12-03 10:54:43.081579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:12.676 [2024-12-03 10:54:43.081695] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79611 ] 00:27:12.676 [2024-12-03 10:54:43.226581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.936 [2024-12-03 10:54:43.367346] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:12.936 [2024-12-03 10:54:43.367517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.504 [2024-12-03 10:54:43.901344] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:13.504 [2024-12-03 10:54:43.901392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:13.504 [2024-12-03 10:54:44.037504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.037543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:13.504 [2024-12-03 10:54:44.037553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:13.504 [2024-12-03 10:54:44.037560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.037601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.037611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:13.504 [2024-12-03 10:54:44.037617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:13.504 [2024-12-03 10:54:44.037623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.037637] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:13.504 [2024-12-03 10:54:44.038191] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:13.504 [2024-12-03 10:54:44.038214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.038220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:13.504 [2024-12-03 10:54:44.038226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.580 ms 00:27:13.504 [2024-12-03 10:54:44.038232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.039187] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:13.504 [2024-12-03 10:54:44.049146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.049177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:13.504 [2024-12-03 10:54:44.049186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.961 ms 00:27:13.504 [2024-12-03 10:54:44.049193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.049242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.049249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:13.504 [2024-12-03 10:54:44.049255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:13.504 [2024-12-03 10:54:44.049261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.053698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.053725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:13.504 [2024-12-03 10:54:44.053732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.389 ms 00:27:13.504 [2024-12-03 10:54:44.053742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.053771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.053777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:13.504 [2024-12-03 10:54:44.053783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:13.504 [2024-12-03 10:54:44.053789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.053824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.053830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:13.504 [2024-12-03 10:54:44.053836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:13.504 [2024-12-03 10:54:44.053842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.053863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:13.504 [2024-12-03 10:54:44.056640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.056665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:13.504 [2024-12-03 10:54:44.056674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.785 ms 00:27:13.504 [2024-12-03 10:54:44.056680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.056702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.504 [2024-12-03 10:54:44.056708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:13.504 [2024-12-03 10:54:44.056715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:13.504 [2024-12-03 10:54:44.056720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.504 [2024-12-03 10:54:44.056736] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
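For orientation, the shutdown/restart bracket traced across the last several entries reduces to this sketch (pids, cpumask and config path as logged; killprocess and waitforlisten are the autotest_common.sh helpers visible in the xtrace):

    # tcp_target_shutdown: stop the main target; because prep_upgrade_on_shutdown
    # was set, the ~8.4 s 'FTL shutdown' above persists L2P, band/NV-cache
    # metadata and the superblock before exiting
    killprocess 79053
    # tcp_target_setup: relaunch from the saved config on core 0; FTL then comes
    # back through 'Load super block' (SHM: clean 0, shm_clean 0) and re-runs
    # layout setup, as the entries around this point show
    spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                 # 79611 in this run
    waitforlisten $spdk_tgt_pid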
00:27:13.504 [2024-12-03 10:54:44.056751] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:13.504 [2024-12-03 10:54:44.056777] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:13.504 [2024-12-03 10:54:44.056791] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:13.504 [2024-12-03 10:54:44.056847] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:13.504 [2024-12-03 10:54:44.056854] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:13.504 [2024-12-03 10:54:44.056862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:13.504 [2024-12-03 10:54:44.056869] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:13.504 [2024-12-03 10:54:44.056876] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:13.504 [2024-12-03 10:54:44.056882] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:13.504 [2024-12-03 10:54:44.056890] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:13.505 [2024-12-03 10:54:44.056895] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:13.505 [2024-12-03 10:54:44.056902] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:13.505 [2024-12-03 10:54:44.056908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.056913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:13.505 [2024-12-03 10:54:44.056919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:27:13.505 [2024-12-03 10:54:44.056924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.505 [2024-12-03 10:54:44.056971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.056978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:13.505 [2024-12-03 10:54:44.056983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:13.505 [2024-12-03 10:54:44.056988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.505 [2024-12-03 10:54:44.057046] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:13.505 [2024-12-03 10:54:44.057075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:13.505 [2024-12-03 10:54:44.057082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:13.505 [2024-12-03 10:54:44.057099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:13.505 [2024-12-03 10:54:44.057109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:13.505 [2024-12-03 10:54:44.057114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:27:13.505 [2024-12-03 10:54:44.057118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057125] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:13.505 [2024-12-03 10:54:44.057130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:13.505 [2024-12-03 10:54:44.057135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:13.505 [2024-12-03 10:54:44.057145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:13.505 [2024-12-03 10:54:44.057160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:13.505 [2024-12-03 10:54:44.057165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:13.505 [2024-12-03 10:54:44.057174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:13.505 [2024-12-03 10:54:44.057179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:13.505 [2024-12-03 10:54:44.057189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:13.505 [2024-12-03 10:54:44.057194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:13.505 [2024-12-03 10:54:44.057203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:13.505 [2024-12-03 10:54:44.057208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:13.505 [2024-12-03 10:54:44.057218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:13.505 [2024-12-03 10:54:44.057222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057227] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:13.505 [2024-12-03 10:54:44.057232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:13.505 [2024-12-03 10:54:44.057236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057241] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:13.505 [2024-12-03 10:54:44.057245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:13.505 [2024-12-03 10:54:44.057250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:13.505 [2024-12-03 10:54:44.057260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057269] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:13.505 [2024-12-03 10:54:44.057274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:13.505 [2024-12-03 10:54:44.057283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:13.505 [2024-12-03 10:54:44.057295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:13.505 [2024-12-03 10:54:44.057300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:13.505 [2024-12-03 10:54:44.057304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:13.505 [2024-12-03 10:54:44.057309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:13.505 [2024-12-03 10:54:44.057314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:13.505 [2024-12-03 10:54:44.057319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:13.505 [2024-12-03 10:54:44.057325] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:13.505 [2024-12-03 10:54:44.057332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:13.505 [2024-12-03 10:54:44.057346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057351] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057357] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:13.505 [2024-12-03 10:54:44.057362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:13.505 [2024-12-03 10:54:44.057372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:13.505 [2024-12-03 10:54:44.057377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:13.505 [2024-12-03 10:54:44.057383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:13.505 [2024-12-03 10:54:44.057410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:13.505 [2024-12-03 10:54:44.057415] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:13.505 [2024-12-03 10:54:44.057421] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057427] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:13.505 [2024-12-03 10:54:44.057432] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:13.505 [2024-12-03 10:54:44.057438] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:13.505 [2024-12-03 10:54:44.057443] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:13.505 [2024-12-03 10:54:44.057449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.057454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:13.505 [2024-12-03 10:54:44.057460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.436 ms 00:27:13.505 [2024-12-03 10:54:44.057467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.505 [2024-12-03 10:54:44.069269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.069297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:13.505 [2024-12-03 10:54:44.069305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.769 ms 00:27:13.505 [2024-12-03 10:54:44.069310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.505 [2024-12-03 10:54:44.069338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.069344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:13.505 [2024-12-03 10:54:44.069351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:13.505 [2024-12-03 10:54:44.069357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.505 [2024-12-03 10:54:44.093365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.093392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:13.505 [2024-12-03 10:54:44.093401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.969 ms 00:27:13.505 [2024-12-03 10:54:44.093407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.505 [2024-12-03 10:54:44.093428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.093435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:13.505 [2024-12-03 10:54:44.093442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:13.505 [2024-12-03 10:54:44.093448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.505 [2024-12-03 10:54:44.093770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.505 [2024-12-03 10:54:44.093793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:13.506 [2024-12-03 
10:54:44.093801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:27:13.506 [2024-12-03 10:54:44.093807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.506 [2024-12-03 10:54:44.093838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.506 [2024-12-03 10:54:44.093844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:13.506 [2024-12-03 10:54:44.093850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:13.506 [2024-12-03 10:54:44.093855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.506 [2024-12-03 10:54:44.105827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.506 [2024-12-03 10:54:44.105855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:13.506 [2024-12-03 10:54:44.105863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.954 ms 00:27:13.506 [2024-12-03 10:54:44.105869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.765 [2024-12-03 10:54:44.115702] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:13.765 [2024-12-03 10:54:44.115732] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:13.765 [2024-12-03 10:54:44.115740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.765 [2024-12-03 10:54:44.115747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:13.765 [2024-12-03 10:54:44.115754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.798 ms 00:27:13.765 [2024-12-03 10:54:44.115765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.765 [2024-12-03 10:54:44.126403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.765 [2024-12-03 10:54:44.126432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:13.765 [2024-12-03 10:54:44.126440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.608 ms 00:27:13.765 [2024-12-03 10:54:44.126447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.765 [2024-12-03 10:54:44.135129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.765 [2024-12-03 10:54:44.135156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:13.765 [2024-12-03 10:54:44.135163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.651 ms 00:27:13.765 [2024-12-03 10:54:44.135168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.765 [2024-12-03 10:54:44.144046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.765 [2024-12-03 10:54:44.144077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:13.765 [2024-12-03 10:54:44.144084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.850 ms 00:27:13.765 [2024-12-03 10:54:44.144090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.765 [2024-12-03 10:54:44.144371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.765 [2024-12-03 10:54:44.144387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:13.765 [2024-12-03 10:54:44.144394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 
00:27:13.765 [2024-12-03 10:54:44.144399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.765 [2024-12-03 10:54:44.190030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.765 [2024-12-03 10:54:44.190067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:13.765 [2024-12-03 10:54:44.190077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.616 ms 00:27:13.766 [2024-12-03 10:54:44.190084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.197916] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:13.766 [2024-12-03 10:54:44.198459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.198484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:13.766 [2024-12-03 10:54:44.198492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.339 ms 00:27:13.766 [2024-12-03 10:54:44.198501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.198547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.198555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:13.766 [2024-12-03 10:54:44.198561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:13.766 [2024-12-03 10:54:44.198567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.198596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.198604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:13.766 [2024-12-03 10:54:44.198610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:13.766 [2024-12-03 10:54:44.198616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.199577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.199603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:13.766 [2024-12-03 10:54:44.199611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.945 ms 00:27:13.766 [2024-12-03 10:54:44.199616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.199636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.199643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:13.766 [2024-12-03 10:54:44.199649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:13.766 [2024-12-03 10:54:44.199654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.199682] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:13.766 [2024-12-03 10:54:44.199689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.199697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:13.766 [2024-12-03 10:54:44.199703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:13.766 [2024-12-03 10:54:44.199708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.217260] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.217287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:13.766 [2024-12-03 10:54:44.217295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.537 ms 00:27:13.766 [2024-12-03 10:54:44.217301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.217357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.766 [2024-12-03 10:54:44.217364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:13.766 [2024-12-03 10:54:44.217370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:13.766 [2024-12-03 10:54:44.217376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.766 [2024-12-03 10:54:44.218099] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 180.274 ms, result 0 00:27:13.766 [2024-12-03 10:54:44.233522] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.766 [2024-12-03 10:54:44.249521] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:13.766 [2024-12-03 10:54:44.257615] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:14.027 10:54:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.027 10:54:44 -- common/autotest_common.sh@862 -- # return 0 00:27:14.027 10:54:44 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:14.027 10:54:44 -- ftl/common.sh@95 -- # return 0 00:27:14.027 10:54:44 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:14.288 [2024-12-03 10:54:44.754535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.288 [2024-12-03 10:54:44.754571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:14.288 [2024-12-03 10:54:44.754581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:14.288 [2024-12-03 10:54:44.754588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.288 [2024-12-03 10:54:44.754605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.288 [2024-12-03 10:54:44.754611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:14.288 [2024-12-03 10:54:44.754617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:14.288 [2024-12-03 10:54:44.754625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.288 [2024-12-03 10:54:44.754640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:14.288 [2024-12-03 10:54:44.754646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:14.288 [2024-12-03 10:54:44.754652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:14.288 [2024-12-03 10:54:44.754657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:14.288 [2024-12-03 10:54:44.754705] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.160 ms, result 0 00:27:14.288 true 00:27:14.288 10:54:44 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:27:14.551 { 00:27:14.551 "name": "ftl", 00:27:14.551 "properties": [ 00:27:14.551 { 00:27:14.551 "name": "superblock_version", 00:27:14.551 "value": 5, 00:27:14.551 "read-only": true 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "name": "base_device", 00:27:14.551 "bands": [ 00:27:14.551 { 00:27:14.551 "id": 0, 00:27:14.551 "state": "CLOSED", 00:27:14.551 "validity": 1.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 1, 00:27:14.551 "state": "CLOSED", 00:27:14.551 "validity": 1.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 2, 00:27:14.551 "state": "CLOSED", 00:27:14.551 "validity": 0.007843137254901933 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 3, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 4, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 5, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 6, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 7, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 8, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 9, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 10, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 11, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 12, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 13, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 14, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 15, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 16, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 17, 00:27:14.551 "state": "FREE", 00:27:14.551 "validity": 0.0 00:27:14.551 } 00:27:14.551 ], 00:27:14.551 "read-only": true 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "name": "cache_device", 00:27:14.551 "type": "bdev", 00:27:14.551 "chunks": [ 00:27:14.551 { 00:27:14.551 "id": 0, 00:27:14.551 "state": "OPEN", 00:27:14.551 "utilization": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 1, 00:27:14.551 "state": "OPEN", 00:27:14.551 "utilization": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 2, 00:27:14.551 "state": "FREE", 00:27:14.551 "utilization": 0.0 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "id": 3, 00:27:14.551 "state": "FREE", 00:27:14.551 "utilization": 0.0 00:27:14.551 } 00:27:14.551 ], 00:27:14.551 "read-only": true 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "name": "verbose_mode", 00:27:14.551 "value": true, 00:27:14.551 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:14.551 }, 00:27:14.551 { 00:27:14.551 "name": "prep_upgrade_on_shutdown", 00:27:14.551 "value": false, 00:27:14.551 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:14.551 } 00:27:14.551 ] 00:27:14.551 } 00:27:14.551 10:54:44 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
00:27:14.551 10:54:44 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:14.551 10:54:44 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:14.551 10:54:45 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:14.551 10:54:45 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:14.551 10:54:45 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:14.551 10:54:45 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:14.551 10:54:45 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:14.811 Validate MD5 checksum, iteration 1 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:14.811 10:54:45 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.811 10:54:45 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:14.811 10:54:45 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:14.811 10:54:45 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:14.811 10:54:45 -- ftl/common.sh@154 -- # return 0 00:27:14.811 10:54:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.811 [2024-12-03 10:54:45.414968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:14.811 [2024-12-03 10:54:45.415088] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79650 ] 00:27:15.069 [2024-12-03 10:54:45.563572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.329 [2024-12-03 10:54:45.734786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.716  [2024-12-03T10:54:48.273Z] Copying: 544/1024 [MB] (544 MBps) [2024-12-03T10:54:50.184Z] Copying: 1024/1024 [MB] (average 512 MBps) 00:27:19.571 00:27:19.571 10:54:49 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:19.571 10:54:49 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:21.478 10:54:51 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:21.478 Validate MD5 checksum, iteration 2 00:27:21.478 10:54:51 -- ftl/upgrade_shutdown.sh@103 -- # sum=0adda0dfa664a7d16dbb2651c290a3ea 00:27:21.478 10:54:51 -- ftl/upgrade_shutdown.sh@105 -- # [[ 0adda0dfa664a7d16dbb2651c290a3ea != \0\a\d\d\a\0\d\f\a\6\6\4\a\7\d\1\6\d\b\b\2\6\5\1\c\2\9\0\a\3\e\a ]] 00:27:21.478 10:54:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:21.478 10:54:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:21.478 10:54:51 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:21.478 10:54:51 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:21.478 10:54:51 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:21.478 10:54:51 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:21.478 10:54:51 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:21.478 10:54:51 -- ftl/common.sh@154 -- # return 0 00:27:21.478 10:54:51 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:21.478 [2024-12-03 10:54:51.879107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:21.478 [2024-12-03 10:54:51.879359] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79723 ] 00:27:21.478 [2024-12-03 10:54:52.025917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.737 [2024-12-03 10:54:52.187704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.111  [2024-12-03T10:54:54.294Z] Copying: 660/1024 [MB] (660 MBps) [2024-12-03T10:54:57.588Z] Copying: 1024/1024 [MB] (average 661 MBps) 00:27:26.975 00:27:26.975 10:54:57 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:26.975 10:54:57 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:28.886 10:54:59 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:28.886 10:54:59 -- ftl/upgrade_shutdown.sh@103 -- # sum=0e3b3c6c4b64f8fe8438b6a9c014b08f 00:27:28.886 10:54:59 -- ftl/upgrade_shutdown.sh@105 -- # [[ 0e3b3c6c4b64f8fe8438b6a9c014b08f != \0\e\3\b\3\c\6\c\4\b\6\4\f\8\f\e\8\4\3\8\b\6\a\9\c\0\1\4\b\0\8\f ]] 00:27:28.886 10:54:59 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:28.886 10:54:59 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:28.886 10:54:59 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:28.886 10:54:59 -- ftl/common.sh@137 -- # [[ -n 79611 ]] 00:27:28.886 10:54:59 -- ftl/common.sh@138 -- # kill -9 79611 00:27:28.886 10:54:59 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:28.886 10:54:59 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:28.886 10:54:59 -- ftl/common.sh@81 -- # local base_bdev= 00:27:28.886 10:54:59 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:28.886 10:54:59 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:28.886 10:54:59 -- ftl/common.sh@89 -- # spdk_tgt_pid=79805 00:27:28.886 10:54:59 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:28.886 10:54:59 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.886 10:54:59 -- ftl/common.sh@91 -- # waitforlisten 79805 00:27:28.886 10:54:59 -- common/autotest_common.sh@829 -- # '[' -z 79805 ']' 00:27:28.886 10:54:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.886 10:54:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.886 10:54:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.886 10:54:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.886 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:27:28.886 [2024-12-03 10:54:59.409173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:28.886 [2024-12-03 10:54:59.409287] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79805 ] 00:27:28.886 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79611 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:29.146 [2024-12-03 10:54:59.546291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.146 [2024-12-03 10:54:59.682103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:29.146 [2024-12-03 10:54:59.682246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.717 [2024-12-03 10:55:00.210655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:29.717 [2024-12-03 10:55:00.210704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:29.981 [2024-12-03 10:55:00.346959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.346991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:29.981 [2024-12-03 10:55:00.347001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:29.981 [2024-12-03 10:55:00.347007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.347045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.347063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:29.981 [2024-12-03 10:55:00.347070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:29.981 [2024-12-03 10:55:00.347075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.347089] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:29.981 [2024-12-03 10:55:00.347653] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:29.981 [2024-12-03 10:55:00.347669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.347675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:29.981 [2024-12-03 10:55:00.347682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.583 ms 00:27:29.981 [2024-12-03 10:55:00.347687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.347983] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:29.981 [2024-12-03 10:55:00.360549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.360575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:29.981 [2024-12-03 10:55:00.360584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.566 ms 00:27:29.981 [2024-12-03 10:55:00.360591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.367456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.367479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:29.981 [2024-12-03 10:55:00.367486] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:29.981 [2024-12-03 10:55:00.367492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.367732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.367745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:29.981 [2024-12-03 10:55:00.367751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:27:29.981 [2024-12-03 10:55:00.367757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.367785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.367791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:29.981 [2024-12-03 10:55:00.367797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:29.981 [2024-12-03 10:55:00.367804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.367821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.367827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:29.981 [2024-12-03 10:55:00.367833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:29.981 [2024-12-03 10:55:00.367838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.367857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:29.981 [2024-12-03 10:55:00.370247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.370266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:29.981 [2024-12-03 10:55:00.370272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.397 ms 00:27:29.981 [2024-12-03 10:55:00.370278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.370297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.370305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:29.981 [2024-12-03 10:55:00.370312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:29.981 [2024-12-03 10:55:00.370317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.370333] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:29.981 [2024-12-03 10:55:00.370346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:29.981 [2024-12-03 10:55:00.370370] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:29.981 [2024-12-03 10:55:00.370381] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:29.981 [2024-12-03 10:55:00.370436] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:29.981 [2024-12-03 10:55:00.370445] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:29.981 [2024-12-03 10:55:00.370454] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:29.981 [2024-12-03 10:55:00.370462] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:29.981 [2024-12-03 10:55:00.370468] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:29.981 [2024-12-03 10:55:00.370474] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:29.981 [2024-12-03 10:55:00.370479] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:29.981 [2024-12-03 10:55:00.370484] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:29.981 [2024-12-03 10:55:00.370490] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:29.981 [2024-12-03 10:55:00.370495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.370501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:29.981 [2024-12-03 10:55:00.370507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:27:29.981 [2024-12-03 10:55:00.370513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.370560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.981 [2024-12-03 10:55:00.370566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:29.981 [2024-12-03 10:55:00.370572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:29.981 [2024-12-03 10:55:00.370576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.981 [2024-12-03 10:55:00.370631] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:29.981 [2024-12-03 10:55:00.370639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:29.981 [2024-12-03 10:55:00.370645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:29.981 [2024-12-03 10:55:00.370651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.981 [2024-12-03 10:55:00.370658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:29.981 [2024-12-03 10:55:00.370663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:29.981 [2024-12-03 10:55:00.370669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:29.981 [2024-12-03 10:55:00.370674] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:29.981 [2024-12-03 10:55:00.370679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:29.981 [2024-12-03 10:55:00.370684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.981 [2024-12-03 10:55:00.370689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:29.981 [2024-12-03 10:55:00.370694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:29.981 [2024-12-03 10:55:00.370701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.982 [2024-12-03 10:55:00.370707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:29.982 [2024-12-03 10:55:00.370712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:29.982 [2024-12-03 10:55:00.370717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.982 [2024-12-03 10:55:00.370722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:29.982 [2024-12-03 10:55:00.370727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:29.982 [2024-12-03 10:55:00.370732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.982 [2024-12-03 10:55:00.370736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:29.982 [2024-12-03 10:55:00.370741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:29.982 [2024-12-03 10:55:00.370746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:29.982 [2024-12-03 10:55:00.370756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:29.982 [2024-12-03 10:55:00.370761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:29.982 [2024-12-03 10:55:00.370771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:29.982 [2024-12-03 10:55:00.370775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:29.982 [2024-12-03 10:55:00.370785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:29.982 [2024-12-03 10:55:00.370790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:29.982 [2024-12-03 10:55:00.370799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:29.982 [2024-12-03 10:55:00.370804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:29.982 [2024-12-03 10:55:00.370813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:29.982 [2024-12-03 10:55:00.370818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.982 [2024-12-03 10:55:00.370823] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:29.982 [2024-12-03 10:55:00.370828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.982 [2024-12-03 10:55:00.370837] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:29.982 [2024-12-03 10:55:00.370842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:29.982 [2024-12-03 10:55:00.370847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:29.982 [2024-12-03 10:55:00.370860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:29.982 [2024-12-03 10:55:00.370865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:29.982 [2024-12-03 10:55:00.370870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:29.982 [2024-12-03 10:55:00.370875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:29.982 [2024-12-03 10:55:00.370880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:29.982 [2024-12-03 10:55:00.370885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:29.982 [2024-12-03 10:55:00.370890] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:29.982 [2024-12-03 10:55:00.370897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:29.982 [2024-12-03 10:55:00.370908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:29.982 [2024-12-03 10:55:00.370929] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:29.982 [2024-12-03 10:55:00.370934] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:29.982 [2024-12-03 10:55:00.370940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:29.982 [2024-12-03 10:55:00.370945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370950] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:29.982 [2024-12-03 10:55:00.370971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:29.982 [2024-12-03 10:55:00.370976] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:29.982 [2024-12-03 10:55:00.370982] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370988] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:29.982 [2024-12-03 10:55:00.370994] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:29.982 [2024-12-03 10:55:00.370999] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:29.982 
[2024-12-03 10:55:00.371004] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:29.982 [2024-12-03 10:55:00.371010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.982 [2024-12-03 10:55:00.371015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:29.982 [2024-12-03 10:55:00.371021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:27:29.982 [2024-12-03 10:55:00.371028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.982 [2024-12-03 10:55:00.381383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.982 [2024-12-03 10:55:00.381405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:29.982 [2024-12-03 10:55:00.381414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.308 ms 00:27:29.982 [2024-12-03 10:55:00.381420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.982 [2024-12-03 10:55:00.381447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.982 [2024-12-03 10:55:00.381452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:29.982 [2024-12-03 10:55:00.381458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:29.982 [2024-12-03 10:55:00.381464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.982 [2024-12-03 10:55:00.405256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.982 [2024-12-03 10:55:00.405279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:29.982 [2024-12-03 10:55:00.405287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.759 ms 00:27:29.982 [2024-12-03 10:55:00.405293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.982 [2024-12-03 10:55:00.405314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.982 [2024-12-03 10:55:00.405321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:29.982 [2024-12-03 10:55:00.405328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:29.982 [2024-12-03 10:55:00.405334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.982 [2024-12-03 10:55:00.405398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.405406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:29.983 [2024-12-03 10:55:00.405412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:29.983 [2024-12-03 10:55:00.405418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.405445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.405453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:29.983 [2024-12-03 10:55:00.405462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:29.983 [2024-12-03 10:55:00.405468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.417337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.417359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:29.983 [2024-12-03 
10:55:00.417366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.854 ms 00:27:29.983 [2024-12-03 10:55:00.417373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.417439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.417446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:29.983 [2024-12-03 10:55:00.417452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:29.983 [2024-12-03 10:55:00.417458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.430028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.430051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:29.983 [2024-12-03 10:55:00.430067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.557 ms 00:27:29.983 [2024-12-03 10:55:00.430072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.436995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.437016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:29.983 [2024-12-03 10:55:00.437024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:27:29.983 [2024-12-03 10:55:00.437030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.481947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.481976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:29.983 [2024-12-03 10:55:00.481985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 44.865 ms 00:27:29.983 [2024-12-03 10:55:00.481991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.482062] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:29.983 [2024-12-03 10:55:00.482096] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:29.983 [2024-12-03 10:55:00.482125] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:29.983 [2024-12-03 10:55:00.482154] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:29.983 [2024-12-03 10:55:00.482159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.482165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:29.983 [2024-12-03 10:55:00.482173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.135 ms 00:27:29.983 [2024-12-03 10:55:00.482180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.482217] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:29.983 [2024-12-03 10:55:00.482225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.482230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:29.983 [2024-12-03 10:55:00.482236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:29.983 [2024-12-03 
10:55:00.482241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.493599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.493623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:29.983 [2024-12-03 10:55:00.493630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.342 ms 00:27:29.983 [2024-12-03 10:55:00.493637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.499894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.499915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:29.983 [2024-12-03 10:55:00.499922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:29.983 [2024-12-03 10:55:00.499928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.499966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:29.983 [2024-12-03 10:55:00.499973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:29.983 [2024-12-03 10:55:00.499979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:29.983 [2024-12-03 10:55:00.499984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:29.983 [2024-12-03 10:55:00.500107] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:30.927 [2024-12-03 10:55:01.197355] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:30.927 [2024-12-03 10:55:01.197506] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:31.495 [2024-12-03 10:55:01.841912] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:31.495 [2024-12-03 10:55:01.841986] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:31.495 [2024-12-03 10:55:01.841996] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:31.495 [2024-12-03 10:55:01.842004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.842012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:31.495 [2024-12-03 10:55:01.842022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1342.001 ms 00:27:31.495 [2024-12-03 10:55:01.842028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.842070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.842078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:31.495 [2024-12-03 10:55:01.842085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:31.495 [2024-12-03 10:55:01.842091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.850767] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:31.495 [2024-12-03 10:55:01.850855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.850863] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:31.495 [2024-12-03 10:55:01.850871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.750 ms 00:27:31.495 [2024-12-03 10:55:01.850877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.851411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.851524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:31.495 [2024-12-03 10:55:01.851536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:27:31.495 [2024-12-03 10:55:01.851543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.853241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.853258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:31.495 [2024-12-03 10:55:01.853266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.681 ms 00:27:31.495 [2024-12-03 10:55:01.853272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.871448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.871477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:31.495 [2024-12-03 10:55:01.871485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.157 ms 00:27:31.495 [2024-12-03 10:55:01.871491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.871563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.871572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:31.495 [2024-12-03 10:55:01.871578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:31.495 [2024-12-03 10:55:01.871584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.872558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.872587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:31.495 [2024-12-03 10:55:01.872594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.961 ms 00:27:31.495 [2024-12-03 10:55:01.872599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.872618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.872625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:31.495 [2024-12-03 10:55:01.872631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:31.495 [2024-12-03 10:55:01.872637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.495 [2024-12-03 10:55:01.872662] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:31.495 [2024-12-03 10:55:01.872670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.495 [2024-12-03 10:55:01.872675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:31.495 [2024-12-03 10:55:01.872683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:31.495 [2024-12-03 10:55:01.872688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:31.496 [2024-12-03 10:55:01.872729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.496 [2024-12-03 10:55:01.872735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:31.496 [2024-12-03 10:55:01.872741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:31.496 [2024-12-03 10:55:01.872747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.496 [2024-12-03 10:55:01.873442] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1526.156 ms, result 0 00:27:31.496 [2024-12-03 10:55:01.887703] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.496 [2024-12-03 10:55:01.903695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:31.496 [2024-12-03 10:55:01.911789] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:31.755 Validate MD5 checksum, iteration 1 00:27:31.755 10:55:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.755 10:55:02 -- common/autotest_common.sh@862 -- # return 0 00:27:31.755 10:55:02 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:31.755 10:55:02 -- ftl/common.sh@95 -- # return 0 00:27:31.755 10:55:02 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:31.755 10:55:02 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:31.755 10:55:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:31.755 10:55:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:31.755 10:55:02 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:31.755 10:55:02 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:31.755 10:55:02 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:31.755 10:55:02 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:31.755 10:55:02 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:31.755 10:55:02 -- ftl/common.sh@154 -- # return 0 00:27:31.755 10:55:02 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:31.755 [2024-12-03 10:55:02.312477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
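The trace above shows ftl/upgrade_shutdown.sh entering its checksum pass: each iteration reads a 1 GiB slice of the rebuilt ftln1 device over NVMe/TCP with spdk_dd and compares its md5sum against the value recorded before shutdown. A minimal sketch of that loop, reconstructed from the xtrace (tcp_dd, $testfile, $iterations, and the md5_values array are stand-ins for the script's real helpers and state, not its verbatim source):

    # Sketch only -- reconstructed from the xtrace above, not the verbatim script.
    # Assumes: tcp_dd wraps spdk_dd against the NVMe/TCP-exported ftln1 bdev, and
    # md5_values[] holds the per-slice checksums recorded before the shutdown.
    validate_checksums() {
      local skip=0 i sum
      for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # 1024 x 1 MiB blocks at queue depth 2, offset by $skip blocks
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${md5_values[i]}" ]] || return 1   # mismatch means data was lost
      done
    }

A mismatch in either iteration would trip the [[ ... != ... ]] guard seen in the trace and fail the test; here both sums match, so the data survived the FTL shutdown/startup cycle intact.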
00:27:31.755 [2024-12-03 10:55:02.312747] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79845 ] 00:27:32.014 [2024-12-03 10:55:02.460419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.273 [2024-12-03 10:55:02.631281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.657  [2024-12-03T10:55:04.843Z] Copying: 636/1024 [MB] (636 MBps) [2024-12-03T10:55:05.779Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:27:35.166 00:27:35.166 10:55:05 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:35.166 10:55:05 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:37.700 10:55:07 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:37.700 10:55:07 -- ftl/upgrade_shutdown.sh@103 -- # sum=0adda0dfa664a7d16dbb2651c290a3ea 00:27:37.700 10:55:07 -- ftl/upgrade_shutdown.sh@105 -- # [[ 0adda0dfa664a7d16dbb2651c290a3ea != \0\a\d\d\a\0\d\f\a\6\6\4\a\7\d\1\6\d\b\b\2\6\5\1\c\2\9\0\a\3\e\a ]] 00:27:37.700 10:55:07 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:37.700 10:55:07 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:37.700 10:55:07 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:37.700 Validate MD5 checksum, iteration 2 00:27:37.700 10:55:07 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:37.700 10:55:07 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:37.700 10:55:07 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:37.700 10:55:07 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:37.700 10:55:07 -- ftl/common.sh@154 -- # return 0 00:27:37.700 10:55:07 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:37.700 [2024-12-03 10:55:07.858985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:37.700 [2024-12-03 10:55:07.859220] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79907 ] 00:27:37.700 [2024-12-03 10:55:08.005598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.700 [2024-12-03 10:55:08.145454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.086  [2024-12-03T10:55:10.271Z] Copying: 650/1024 [MB] (650 MBps) [2024-12-03T10:55:11.215Z] Copying: 1024/1024 [MB] (average 654 MBps) 00:27:40.602 00:27:40.602 10:55:11 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:40.602 10:55:11 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@103 -- # sum=0e3b3c6c4b64f8fe8438b6a9c014b08f 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@105 -- # [[ 0e3b3c6c4b64f8fe8438b6a9c014b08f != \0\e\3\b\3\c\6\c\4\b\6\4\f\8\f\e\8\4\3\8\b\6\a\9\c\0\1\4\b\0\8\f ]] 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:42.507 10:55:12 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:42.507 10:55:12 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:42.507 10:55:12 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:42.507 10:55:12 -- ftl/common.sh@130 -- # [[ -n 79805 ]] 00:27:42.507 10:55:12 -- ftl/common.sh@131 -- # killprocess 79805 00:27:42.507 10:55:12 -- common/autotest_common.sh@936 -- # '[' -z 79805 ']' 00:27:42.507 10:55:12 -- common/autotest_common.sh@940 -- # kill -0 79805 00:27:42.507 10:55:12 -- common/autotest_common.sh@941 -- # uname 00:27:42.507 10:55:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:42.507 10:55:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79805 00:27:42.507 killing process with pid 79805 00:27:42.507 10:55:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:42.507 10:55:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:42.507 10:55:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79805' 00:27:42.507 10:55:12 -- common/autotest_common.sh@955 -- # kill 79805 00:27:42.507 10:55:12 -- common/autotest_common.sh@960 -- # wait 79805 00:27:43.076 [2024-12-03 10:55:13.403330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:43.076 [2024-12-03 10:55:13.415456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.415494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:43.076 [2024-12-03 10:55:13.415506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:43.076 [2024-12-03 10:55:13.415513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 
[2024-12-03 10:55:13.415530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:43.076 [2024-12-03 10:55:13.417633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.417664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:43.076 [2024-12-03 10:55:13.417673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.091 ms 00:27:43.076 [2024-12-03 10:55:13.417679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.417867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.417879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:43.076 [2024-12-03 10:55:13.417886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.169 ms 00:27:43.076 [2024-12-03 10:55:13.417892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.420784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.420816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:43.076 [2024-12-03 10:55:13.420824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.879 ms 00:27:43.076 [2024-12-03 10:55:13.420831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.421697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.421720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:43.076 [2024-12-03 10:55:13.421727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:27:43.076 [2024-12-03 10:55:13.421734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.430615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.430644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:43.076 [2024-12-03 10:55:13.430652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.843 ms 00:27:43.076 [2024-12-03 10:55:13.430659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.435193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.435221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:43.076 [2024-12-03 10:55:13.435230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.506 ms 00:27:43.076 [2024-12-03 10:55:13.435237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.435306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.435330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:43.076 [2024-12-03 10:55:13.435337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:43.076 [2024-12-03 10:55:13.435343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.443226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.443252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:43.076 [2024-12-03 10:55:13.443259] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.870 ms 00:27:43.076 [2024-12-03 10:55:13.443265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.451078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.451102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:43.076 [2024-12-03 10:55:13.451109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.788 ms 00:27:43.076 [2024-12-03 10:55:13.451115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.458746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.458895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:43.076 [2024-12-03 10:55:13.458908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.606 ms 00:27:43.076 [2024-12-03 10:55:13.458913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.466726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.466751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:43.076 [2024-12-03 10:55:13.466758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.766 ms 00:27:43.076 [2024-12-03 10:55:13.466763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.076 [2024-12-03 10:55:13.466788] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:43.076 [2024-12-03 10:55:13.466800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.076 [2024-12-03 10:55:13.466812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.076 [2024-12-03 10:55:13.466819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:43.076 [2024-12-03 10:55:13.466825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466887] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:43.076 [2024-12-03 10:55:13.466923] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:43.076 [2024-12-03 10:55:13.466929] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1527cf0e-44c7-4fd4-a655-126b687d8e76 00:27:43.076 [2024-12-03 10:55:13.466936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:43.076 [2024-12-03 10:55:13.466942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:43.076 [2024-12-03 10:55:13.466947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:43.076 [2024-12-03 10:55:13.466953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:43.076 [2024-12-03 10:55:13.466959] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:43.076 [2024-12-03 10:55:13.466966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:43.076 [2024-12-03 10:55:13.466971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:43.076 [2024-12-03 10:55:13.466977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:43.076 [2024-12-03 10:55:13.466982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:43.076 [2024-12-03 10:55:13.466989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.076 [2024-12-03 10:55:13.466996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:43.076 [2024-12-03 10:55:13.467002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:27:43.076 [2024-12-03 10:55:13.467010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.477135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.077 [2024-12-03 10:55:13.477236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:43.077 [2024-12-03 10:55:13.477249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.101 ms 00:27:43.077 [2024-12-03 10:55:13.477255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.477418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.077 [2024-12-03 10:55:13.477427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:43.077 [2024-12-03 10:55:13.477437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:27:43.077 [2024-12-03 10:55:13.477443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.514486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.514512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:43.077 [2024-12-03 10:55:13.514521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.514527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.514553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.514561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:43.077 [2024-12-03 10:55:13.514572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.514578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.514634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.514642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:43.077 [2024-12-03 10:55:13.514648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.514655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.514668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.514675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:43.077 [2024-12-03 10:55:13.514681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.514689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.577199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.577236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:43.077 [2024-12-03 10:55:13.577245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.577253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.600900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.601043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:43.077 [2024-12-03 10:55:13.601073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.601080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.601132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.601139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:43.077 [2024-12-03 10:55:13.601146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.601152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.601185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.601192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:43.077 [2024-12-03 10:55:13.601199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.601206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.601287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.601296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:43.077 [2024-12-03 10:55:13.601303] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.601309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.601337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.601345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:43.077 [2024-12-03 10:55:13.601355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.601363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.601398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.601407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:43.077 [2024-12-03 10:55:13.601413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.601419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.601459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.077 [2024-12-03 10:55:13.601466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:43.077 [2024-12-03 10:55:13.601472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.077 [2024-12-03 10:55:13.601478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.077 [2024-12-03 10:55:13.601592] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 186.108 ms, result 0 00:27:44.017 10:55:14 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:44.017 10:55:14 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:44.017 10:55:14 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:44.017 10:55:14 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:44.017 10:55:14 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:44.017 10:55:14 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:44.017 Remove shared memory files 00:27:44.017 10:55:14 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:44.017 10:55:14 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:44.017 10:55:14 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:44.017 10:55:14 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:44.017 10:55:14 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79611 00:27:44.017 10:55:14 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:44.017 10:55:14 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:44.017 ************************************ 00:27:44.017 END TEST ftl_upgrade_shutdown 00:27:44.017 ************************************ 00:27:44.017 00:27:44.017 real 1m17.611s 00:27:44.017 user 1m49.685s 00:27:44.017 sys 0m19.144s 00:27:44.017 10:55:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:44.017 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:27:44.017 10:55:14 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:44.017 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:44.017 10:55:14 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:44.017 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:44.017 10:55:14 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:44.017 10:55:14 -- ftl/ftl.sh@14 -- # killprocess 70585 00:27:44.017 10:55:14 -- 
common/autotest_common.sh@936 -- # '[' -z 70585 ']' 00:27:44.017 10:55:14 -- common/autotest_common.sh@940 -- # kill -0 70585 00:27:44.017 Process with pid 70585 is not found 00:27:44.017 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70585) - No such process 00:27:44.017 10:55:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70585 is not found' 00:27:44.017 10:55:14 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:44.017 10:55:14 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=80010 00:27:44.017 10:55:14 -- ftl/ftl.sh@20 -- # waitforlisten 80010 00:27:44.017 10:55:14 -- common/autotest_common.sh@829 -- # '[' -z 80010 ']' 00:27:44.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.017 10:55:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.017 10:55:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.017 10:55:14 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:44.017 10:55:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.017 10:55:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.017 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:27:44.017 [2024-12-03 10:55:14.422149] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:44.017 [2024-12-03 10:55:14.422405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80010 ] 00:27:44.017 [2024-12-03 10:55:14.571726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.279 [2024-12-03 10:55:14.818902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:44.279 [2024-12-03 10:55:14.819396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.666 10:55:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.667 10:55:15 -- common/autotest_common.sh@862 -- # return 0 00:27:45.667 10:55:15 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:45.667 nvme0n1 00:27:45.667 10:55:16 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:45.667 10:55:16 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:45.667 10:55:16 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:45.927 10:55:16 -- ftl/common.sh@28 -- # stores=bbd4e6b4-90b7-46c8-8ce1-61caf8a889ce 00:27:45.927 10:55:16 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:45.927 10:55:16 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bbd4e6b4-90b7-46c8-8ce1-61caf8a889ce 00:27:45.927 10:55:16 -- ftl/ftl.sh@23 -- # killprocess 80010 00:27:45.927 10:55:16 -- common/autotest_common.sh@936 -- # '[' -z 80010 ']' 00:27:45.927 10:55:16 -- common/autotest_common.sh@940 -- # kill -0 80010 00:27:45.927 10:55:16 -- common/autotest_common.sh@941 -- # uname 00:27:45.927 10:55:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:45.927 10:55:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80010 00:27:46.188 killing process with pid 80010 00:27:46.189 10:55:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:46.189 10:55:16 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:46.189 10:55:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80010' 00:27:46.189 10:55:16 -- common/autotest_common.sh@955 -- # kill 80010 00:27:46.189 10:55:16 -- common/autotest_common.sh@960 -- # wait 80010 00:27:47.623 10:55:17 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:47.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:47.623 Waiting for block devices as requested 00:27:47.623 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.623 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.623 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.941 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:53.235 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:53.235 10:55:23 -- ftl/ftl.sh@28 -- # remove_shm 00:27:53.235 Remove shared memory files 00:27:53.235 10:55:23 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:53.235 10:55:23 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:53.235 10:55:23 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:53.235 10:55:23 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:53.235 10:55:23 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:53.235 10:55:23 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:53.235 00:27:53.235 real 13m45.539s 00:27:53.235 user 16m2.859s 00:27:53.235 sys 1m11.826s 00:27:53.235 ************************************ 00:27:53.235 END TEST ftl 00:27:53.235 ************************************ 00:27:53.235 10:55:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:53.235 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.235 10:55:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:53.235 10:55:23 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:53.235 10:55:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:53.235 10:55:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:53.235 10:55:23 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:53.235 10:55:23 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:53.235 10:55:23 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:53.235 10:55:23 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:53.235 10:55:23 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:53.235 10:55:23 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:53.235 10:55:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.235 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.235 10:55:23 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:53.235 10:55:23 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:53.235 10:55:23 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:53.235 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:27:54.177 INFO: APP EXITING 00:27:54.177 INFO: killing all VMs 00:27:54.177 INFO: killing vhost app 00:27:54.177 INFO: EXIT DONE 00:27:54.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:55.007 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:55.007 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:55.007 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:55.007 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:55.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
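The xtrace above is autotest_common.sh's killprocess helper tearing down the spdk_tgt app (pid 80010): probe with kill -0, inspect the process name, signal, then reap. A simplified sketch of that pattern, following the steps visible in the trace (the sudo branch the trace tests for is elided here):

    # Simplified sketch of the killprocess pattern traced above; the real helper
    # in test/common/autotest_common.sh also handles processes run under sudo.
    killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      if ! kill -0 "$pid" 2>/dev/null; then            # probe without signalling
        echo "Process with pid $pid is not found"      # already gone, as with pid 70585 above
        return 0
      fi
      process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for SPDK apps
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # reap the child so its exit status is collected
    }

The kill -0 probe is what produced the "No such process" branch for pid 70585 earlier in this run, while pid 80010 above took the kill/wait path.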
00:27:55.576 Cleaning 00:27:55.576 Removing: /var/run/dpdk/spdk0/config 00:27:55.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:55.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:55.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:55.838 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:55.838 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:55.838 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:55.838 Removing: /var/run/dpdk/spdk0 00:27:55.838 Removing: /var/run/dpdk/spdk_pid55988 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56184 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56496 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56596 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56691 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56790 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56875 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56920 00:27:55.838 Removing: /var/run/dpdk/spdk_pid56962 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57026 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57132 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57556 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57615 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57667 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57683 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57781 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57792 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57896 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57914 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57967 00:27:55.838 Removing: /var/run/dpdk/spdk_pid57998 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58051 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58076 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58232 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58274 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58362 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58421 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58452 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58519 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58545 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58586 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58612 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58649 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58674 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58709 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58735 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58776 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58802 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58843 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58866 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58905 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58933 00:27:55.838 Removing: /var/run/dpdk/spdk_pid58974 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59000 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59042 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59070 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59116 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59147 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59189 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59209 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59255 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59271 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59312 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59338 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59379 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59405 00:27:55.838 Removing: /var/run/dpdk/spdk_pid59446 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59472 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59513 00:27:55.839 Removing: 
/var/run/dpdk/spdk_pid59539 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59580 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59609 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59659 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59691 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59735 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59761 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59802 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59828 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59870 00:27:55.839 Removing: /var/run/dpdk/spdk_pid59953 00:27:55.839 Removing: /var/run/dpdk/spdk_pid60071 00:27:55.839 Removing: /var/run/dpdk/spdk_pid60254 00:27:55.839 Removing: /var/run/dpdk/spdk_pid60340 00:27:55.839 Removing: /var/run/dpdk/spdk_pid60382 00:27:55.839 Removing: /var/run/dpdk/spdk_pid60824 00:27:55.839 Removing: /var/run/dpdk/spdk_pid61028 00:27:55.839 Removing: /var/run/dpdk/spdk_pid61137 00:27:55.839 Removing: /var/run/dpdk/spdk_pid61190 00:27:55.839 Removing: /var/run/dpdk/spdk_pid61221 00:27:55.839 Removing: /var/run/dpdk/spdk_pid61304 00:27:55.839 Removing: /var/run/dpdk/spdk_pid61964 00:27:55.839 Removing: /var/run/dpdk/spdk_pid62006 00:27:55.839 Removing: /var/run/dpdk/spdk_pid62509 00:27:55.839 Removing: /var/run/dpdk/spdk_pid62639 00:27:55.839 Removing: /var/run/dpdk/spdk_pid62750 00:27:55.839 Removing: /var/run/dpdk/spdk_pid62803 00:27:55.839 Removing: /var/run/dpdk/spdk_pid62834 00:27:55.839 Removing: /var/run/dpdk/spdk_pid62865 00:27:55.839 Removing: /var/run/dpdk/spdk_pid64802 00:27:55.839 Removing: /var/run/dpdk/spdk_pid64941 00:27:55.839 Removing: /var/run/dpdk/spdk_pid64945 00:27:55.839 Removing: /var/run/dpdk/spdk_pid64968 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65014 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65018 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65030 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65080 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65084 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65096 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65152 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65156 00:27:55.839 Removing: /var/run/dpdk/spdk_pid65172 00:27:55.839 Removing: /var/run/dpdk/spdk_pid66614 00:27:56.099 Removing: /var/run/dpdk/spdk_pid66724 00:27:56.099 Removing: /var/run/dpdk/spdk_pid66857 00:27:56.099 Removing: /var/run/dpdk/spdk_pid66944 00:27:56.099 Removing: /var/run/dpdk/spdk_pid67026 00:27:56.099 Removing: /var/run/dpdk/spdk_pid67102 00:27:56.099 Removing: /var/run/dpdk/spdk_pid67212 00:27:56.099 Removing: /var/run/dpdk/spdk_pid67283 00:27:56.099 Removing: /var/run/dpdk/spdk_pid67428 00:27:56.099 Removing: /var/run/dpdk/spdk_pid67817 00:27:56.099 Removing: /var/run/dpdk/spdk_pid67854 00:27:56.099 Removing: /var/run/dpdk/spdk_pid68300 00:27:56.099 Removing: /var/run/dpdk/spdk_pid68484 00:27:56.099 Removing: /var/run/dpdk/spdk_pid68583 00:27:56.099 Removing: /var/run/dpdk/spdk_pid68698 00:27:56.099 Removing: /var/run/dpdk/spdk_pid68740 00:27:56.099 Removing: /var/run/dpdk/spdk_pid68771 00:27:56.099 Removing: /var/run/dpdk/spdk_pid69086 00:27:56.099 Removing: /var/run/dpdk/spdk_pid69149 00:27:56.099 Removing: /var/run/dpdk/spdk_pid69232 00:27:56.099 Removing: /var/run/dpdk/spdk_pid69623 00:27:56.099 Removing: /var/run/dpdk/spdk_pid69774 00:27:56.099 Removing: /var/run/dpdk/spdk_pid70585 00:27:56.099 Removing: /var/run/dpdk/spdk_pid70716 00:27:56.099 Removing: /var/run/dpdk/spdk_pid70911 00:27:56.099 Removing: /var/run/dpdk/spdk_pid71016 00:27:56.099 Removing: /var/run/dpdk/spdk_pid71329 00:27:56.099 Removing: /var/run/dpdk/spdk_pid71589 
00:27:56.099 Removing: /var/run/dpdk/spdk_pid71950 00:27:56.099 Removing: /var/run/dpdk/spdk_pid72135 00:27:56.099 Removing: /var/run/dpdk/spdk_pid72334 00:27:56.099 Removing: /var/run/dpdk/spdk_pid72387 00:27:56.099 Removing: /var/run/dpdk/spdk_pid72645 00:27:56.099 Removing: /var/run/dpdk/spdk_pid72670 00:27:56.099 Removing: /var/run/dpdk/spdk_pid72724 00:27:56.099 Removing: /var/run/dpdk/spdk_pid73048 00:27:56.099 Removing: /var/run/dpdk/spdk_pid73269 00:27:56.099 Removing: /var/run/dpdk/spdk_pid73929 00:27:56.099 Removing: /var/run/dpdk/spdk_pid74618 00:27:56.099 Removing: /var/run/dpdk/spdk_pid75379 00:27:56.099 Removing: /var/run/dpdk/spdk_pid76123 00:27:56.099 Removing: /var/run/dpdk/spdk_pid76288 00:27:56.099 Removing: /var/run/dpdk/spdk_pid76385 00:27:56.099 Removing: /var/run/dpdk/spdk_pid76952 00:27:56.099 Removing: /var/run/dpdk/spdk_pid77016 00:27:56.099 Removing: /var/run/dpdk/spdk_pid77795 00:27:56.099 Removing: /var/run/dpdk/spdk_pid78301 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79053 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79182 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79224 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79292 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79349 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79413 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79611 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79650 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79723 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79805 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79845 00:27:56.099 Removing: /var/run/dpdk/spdk_pid79907 00:27:56.099 Removing: /var/run/dpdk/spdk_pid80010 00:27:56.099 Clean 00:27:56.099 killing process with pid 48171 00:27:56.099 killing process with pid 48176 00:27:56.358 10:55:26 -- common/autotest_common.sh@1446 -- # return 0 00:27:56.358 10:55:26 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:56.358 10:55:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.358 10:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.358 10:55:26 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:56.358 10:55:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.358 10:55:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.358 10:55:26 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:56.358 10:55:26 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:56.358 10:55:26 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:56.358 10:55:26 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:56.358 10:55:26 -- spdk/autotest.sh@383 -- # hostname 00:27:56.358 10:55:26 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:56.618 geninfo: WARNING: invalid characters removed from testname! 
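The geninfo warning above was emitted while capturing test coverage; the lcov invocations that follow merge that capture with the pre-test baseline and strip vendored and system paths from the total. Condensed, the flow looks roughly like the sketch below (the repeated --rc lcov_*/genhtml_* option lists and the --ignore-errors flag from the trace are abbreviated; paths are taken from the log):

    # Condensed sketch of the coverage post-processing traced around this point;
    # not the verbatim autotest.sh commands -- flags are abbreviated for brevity.
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    repo=/home/vagrant/spdk_repo/spdk
    out=$repo/../output
    $LCOV -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"  # capture test run
    $LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      $LCOV -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"  # drop uninteresting paths
    done

Each filtered pattern corresponds to one of the timestamped lcov -r invocations in the log below, run in the same order.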
00:28:23.204 10:55:49 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:23.204 10:55:52 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:24.590 10:55:54 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:25.974 10:55:56 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:27.888 10:55:58 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:30.436 10:56:00 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:32.982 10:56:03 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:32.982 10:56:03 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:32.982 10:56:03 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:32.982 10:56:03 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:32.982 10:56:03 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:32.982 10:56:03 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:32.982 10:56:03 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:32.982 10:56:03 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:32.982 10:56:03 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:32.982 10:56:03 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:32.982 10:56:03 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:32.982 10:56:03 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:32.982 10:56:03 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:32.982 10:56:03 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:32.982 10:56:03 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:32.982 10:56:03 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:28:32.982 10:56:03 -- scripts/common.sh@343 -- $ case "$op" in 00:28:32.982 10:56:03 -- scripts/common.sh@344 -- $ : 1 00:28:32.982 10:56:03 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:32.982 10:56:03 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:32.982 10:56:03 -- scripts/common.sh@364 -- $ decimal 1 00:28:32.982 10:56:03 -- scripts/common.sh@352 -- $ local d=1 00:28:32.982 10:56:03 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:32.982 10:56:03 -- scripts/common.sh@354 -- $ echo 1 00:28:32.982 10:56:03 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:32.982 10:56:03 -- scripts/common.sh@365 -- $ decimal 2 00:28:32.982 10:56:03 -- scripts/common.sh@352 -- $ local d=2 00:28:32.982 10:56:03 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:32.982 10:56:03 -- scripts/common.sh@354 -- $ echo 2 00:28:32.982 10:56:03 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:32.982 10:56:03 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:32.982 10:56:03 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:32.982 10:56:03 -- scripts/common.sh@367 -- $ return 0 00:28:32.982 10:56:03 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.982 10:56:03 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.982 --rc genhtml_branch_coverage=1 00:28:32.982 --rc genhtml_function_coverage=1 00:28:32.982 --rc genhtml_legend=1 00:28:32.982 --rc geninfo_all_blocks=1 00:28:32.982 --rc geninfo_unexecuted_blocks=1 00:28:32.982 00:28:32.982 ' 00:28:32.982 10:56:03 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.982 --rc genhtml_branch_coverage=1 00:28:32.982 --rc genhtml_function_coverage=1 00:28:32.982 --rc genhtml_legend=1 00:28:32.982 --rc geninfo_all_blocks=1 00:28:32.982 --rc geninfo_unexecuted_blocks=1 00:28:32.982 00:28:32.982 ' 00:28:32.982 10:56:03 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.982 --rc genhtml_branch_coverage=1 00:28:32.982 --rc genhtml_function_coverage=1 00:28:32.982 --rc genhtml_legend=1 00:28:32.982 --rc geninfo_all_blocks=1 00:28:32.982 --rc geninfo_unexecuted_blocks=1 00:28:32.982 00:28:32.982 ' 00:28:32.982 10:56:03 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.982 --rc genhtml_branch_coverage=1 00:28:32.982 --rc genhtml_function_coverage=1 00:28:32.982 --rc genhtml_legend=1 00:28:32.982 --rc geninfo_all_blocks=1 00:28:32.982 --rc geninfo_unexecuted_blocks=1 00:28:32.982 00:28:32.982 ' 00:28:32.982 10:56:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:32.982 10:56:03 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:32.982 10:56:03 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.982 10:56:03 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.983 10:56:03 -- paths/export.sh@2 -- $ 
00:28:32.983 10:56:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.983 10:56:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.983 10:56:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.983 10:56:03 -- paths/export.sh@5 -- $ export PATH
00:28:32.983 10:56:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:32.983 10:56:03 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:32.983 10:56:03 -- common/autobuild_common.sh@440 -- $ date +%s
00:28:32.983 10:56:03 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733223363.XXXXXX
00:28:32.983 10:56:03 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733223363.I3nZe3
00:28:32.983 10:56:03 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:28:32.983 10:56:03 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:28:32.983 10:56:03 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:28:32.983 10:56:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:32.983 10:56:03 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:32.983 10:56:03 -- common/autobuild_common.sh@456 -- $ get_config_params
00:28:32.983 10:56:03 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:28:32.983 10:56:03 -- common/autotest_common.sh@10 -- $ set +x
00:28:32.983 10:56:03 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:28:32.983 10:56:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:28:32.983 10:56:03 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:28:32.983 10:56:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:32.983 10:56:03 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
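Among the steps above, autobuild_common.sh@440 shows a common pattern for stamping a unique scratch workspace: an epoch timestamp combined with mktemp's random suffix, which produced the trace's /tmp/spdk_1733223363.I3nZe3. In isolation, and assuming GNU mktemp, the pattern looks like this (the variable name is illustrative):

    # -d makes a directory, -t resolves the template under $TMPDIR
    # (default /tmp), and mktemp replaces the trailing XXXXXX with
    # random characters, so concurrent builds cannot collide.
    workspace=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
    trap 'rm -rf "$workspace"' EXIT    # remove the scratch dir on exit
    echo "using scratch workspace: $workspace"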
00:28:32.983 10:56:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:32.983 10:56:03 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:32.983 10:56:03 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:32.983 10:56:03 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:32.983 10:56:03 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:33.243 10:56:03 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:33.243 + [[ -n 4991 ]]
00:28:33.243 + sudo kill 4991
00:28:33.254 [Pipeline] }
00:28:33.271 [Pipeline] // timeout
00:28:33.278 [Pipeline] }
00:28:33.295 [Pipeline] // stage
00:28:33.302 [Pipeline] }
00:28:33.316 [Pipeline] // catchError
00:28:33.324 [Pipeline] stage
00:28:33.326 [Pipeline] { (Stop VM)
00:28:33.336 [Pipeline] sh
00:28:33.620 + vagrant halt
00:28:36.186 ==> default: Halting domain...
00:28:42.805 [Pipeline] sh
00:28:43.083 + vagrant destroy -f
00:28:45.621 ==> default: Removing domain...
00:28:46.203 [Pipeline] sh
00:28:46.485 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:28:46.495 [Pipeline] }
00:28:46.510 [Pipeline] // stage
00:28:46.516 [Pipeline] }
00:28:46.531 [Pipeline] // dir
00:28:46.536 [Pipeline] }
00:28:46.551 [Pipeline] // wrap
00:28:46.558 [Pipeline] }
00:28:46.571 [Pipeline] // catchError
00:28:46.580 [Pipeline] stage
00:28:46.583 [Pipeline] { (Epilogue)
00:28:46.596 [Pipeline] sh
00:28:46.880 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:51.079 [Pipeline] catchError
00:28:51.081 [Pipeline] {
00:28:51.093 [Pipeline] sh
00:28:51.376 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:51.376 Artifacts sizes are good
00:28:51.386 [Pipeline] }
00:28:51.399 [Pipeline] // catchError
00:28:51.409 [Pipeline] archiveArtifacts
00:28:51.416 Archiving artifacts
00:28:51.550 [Pipeline] cleanWs
00:28:51.562 [WS-CLEANUP] Deleting project workspace...
00:28:51.562 [WS-CLEANUP] Deferred wipeout is used...
00:28:51.570 [WS-CLEANUP] done
00:28:51.572 [Pipeline] }
00:28:51.586 [Pipeline] // stage
00:28:51.590 [Pipeline] }
00:28:51.603 [Pipeline] // node
00:28:51.608 [Pipeline] End of Pipeline
00:28:51.630 Finished: SUCCESS
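For reference, the timing_finish step at the top of this teardown feeds the build's per-step timings through Brendan Gregg's flamegraph.pl with exactly the flags shown in the trace. Run standalone it would look like the sketch below; flamegraph.pl writes its SVG to stdout, and the output file name here is illustrative (the log's timing.txt presumably holds folded "step;substep duration" lines collected during the build):

    # Render the collected step timings as a zoomable SVG flame chart.
    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        timing.txt > build_timing.svg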